From 9dacf0764b506a67f33dd63bdf48fc273c0bdb7f Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:46 +0100 Subject: [PATCH 01/44] target/arm: Note that we handle VMOVL as a special case of VSHLL Although the architecture doesn't define it as an alias, VMOVL (vector move long) is encoded as a VSHLL with a zero shift. Add a comment in the decode file noting that we handle VMOVL as part of VSHLL. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve.decode | 2 ++ 1 file changed, 2 insertions(+) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 595d97568e..fa9d921f93 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -364,6 +364,8 @@ VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w # VSHLL T1 encoding; the T2 VSHLL encoding is elsewhere in this file +# Note that VMOVL is encoded as "VSHLL with a zero shift count"; we +# implement it that way rather than special-casing it in the decode. VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_b VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_h From aa29190826f2f061ed3ffad0a6cabb30eaf7f8f0 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:47 +0100 Subject: [PATCH 02/44] target/arm: Print MVE VPR in CPU dumps Include the MVE VPR register value in the CPU dumps produced by arm_cpu_dump_state() if we are printing FPU information. This makes it easier to interpret debug logs when predication is active. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/cpu.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 2866dd7658..a82e39dd97 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -1017,6 +1017,9 @@ static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags) i, v); } qemu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env)); + if (cpu_isar_feature(aa32_mve, cpu)) { + qemu_fprintf(f, "VPR: %08x\n", env->v7m.vpr); + } } } From c88ff88498ea95e78d5fbd192de5123c1d88f9a8 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:47 +0100 Subject: [PATCH 03/44] target/arm: Fix MVE VSLI by 0 and VSRI by
<dt>

In the MVE shift-and-insert insns, we special case VSLI by 0 and VSRI by <dt>. VSRI by <dt> means "don't update the destination", which is what we've implemented. However VSLI by 0 is "set destination to the input", so we don't want to use the same special-casing that we do for VSRI by <dt>
. Since the generic logic gives the right answer for a shift by 0, just use that. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index db5d622085..f14fa914b6 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1279,11 +1279,12 @@ DO_2SHIFT_S(vrshli_s, DO_VRSHLS) uint16_t mask; \ uint64_t shiftmask; \ unsigned e; \ - if (shift == 0 || shift == ESIZE * 8) { \ + if (shift == ESIZE * 8) { \ /* \ - * Only VSLI can shift by 0; only VSRI can shift by
<dt>. \ - * The generic logic would give the right answer for 0 but \ - * fails for <dt>. \ + * Only VSRI can shift by <dt>
; it should mean "don't \ + * update the destination". The generic logic can't handle \ + * this because it would try to shift by an out-of-range \ + * amount, so special case it here. \ */ \ goto done; \ } \ From ed5a59d61f1619f4015a7a02f72e3590528008b4 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:47 +0100 Subject: [PATCH 04/44] target/arm: Fix signed VADDV A cut-and-paste error meant we handled signed VADDV like unsigned VADDV; fix the type used. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index f14fa914b6..82151b0620 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1182,9 +1182,9 @@ DO_LDAVH(vrmlsldavhxsw, int32_t, int64_t, true, true) return ra; \ } \ -DO_VADDV(vaddvsb, 1, uint8_t) -DO_VADDV(vaddvsh, 2, uint16_t) -DO_VADDV(vaddvsw, 4, uint32_t) +DO_VADDV(vaddvsb, 1, int8_t) +DO_VADDV(vaddvsh, 2, int16_t) +DO_VADDV(vaddvsw, 4, int32_t) DO_VADDV(vaddvub, 1, uint8_t) DO_VADDV(vaddvuh, 2, uint16_t) DO_VADDV(vaddvuw, 4, uint32_t) From a5e59e8dcbbf7b1205370b3f4519749df5d0b726 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:47 +0100 Subject: [PATCH 05/44] target/arm: Fix mask handling for MVE narrowing operations In the MVE helpers for the narrowing operations (DO_VSHRN and DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for the 'top' versions of the insn. This is because the loop works over the double-sized input elements and shifts the predicate mask by that many bits each time, but when we write out the half-sized output we must look at the mask bits for whichever half of the element we are writing to. Correct this by shifting the whole mask right by ESIZE bits for the 'top' insns. This allows us also to simplify the saturation bit checking (where we had noticed that we needed to look at a different mask bit for the 'top' insn.) 
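
As a side note, here is a minimal standalone sketch (not part of the patch; the function name and the hard-coded word-to-halfword sizes are illustrative assumptions) of which predicate bit governs each narrowed output lane, and why the single pre-shift is enough:

#include <stdbool.h>
#include <stdint.h>

/*
 * For a 32-bit -> 16-bit narrowing op, the output lane (le * 2 + TOP)
 * starts at byte (le * 4 + TOP * 2) of the vector, so that is the
 * predicate-mask bit which must control it.  Shifting the mask right by
 * ESIZE * TOP once, and by LESIZE per loop iteration, brings exactly
 * that bit down to bit 0.
 */
static bool narrow_lane_enabled(uint16_t mask, unsigned le, bool top)
{
    const unsigned esize = 2, lesize = 4;   /* halfword out, word in */
    uint16_t m = mask >> (esize * top);     /* the pre-shift this patch adds */
    m >>= le * lesize;                      /* the per-iteration shift */
    return m & 1;
}

For example, with top set and le == 0 this inspects bit 2 of the original mask, i.e. the predicate byte for the upper half of the first double-width input element, which is where the 'top' result is written.
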
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 82151b0620..847ef5156a 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1358,6 +1358,7 @@ DO_VSHLL_ALL(vshllt, true) TYPE *d = vd; \ uint16_t mask = mve_element_mask(env); \ unsigned le; \ + mask >>= ESIZE * TOP; \ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \ TYPE r = FN(m[H##LESIZE(le)], shift); \ mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \ @@ -1419,11 +1420,12 @@ static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max, uint16_t mask = mve_element_mask(env); \ bool qc = false; \ unsigned le; \ + mask >>= ESIZE * TOP; \ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \ bool sat = false; \ TYPE r = FN(m[H##LESIZE(le)], shift, &sat); \ mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \ - qc |= sat && (mask & 1 << (TOP * ESIZE)); \ + qc |= sat & mask & 1; \ } \ if (qc) { \ env->vfp.qc[0] = qc; \ From 95351aa76c8f68564c4be547c1d19d9cabffc147 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:48 +0100 Subject: [PATCH 06/44] target/arm: Fix 48-bit saturating shifts In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge cases wrong and failed to saturate correctly: (1) In do_sqrshl48_d() we used the same code that do_shrshl_bhs() does to obtain the saturated most-negative and most-positive 48-bit signed values for the large-shift-left case. This gives (1 << 47) for saturate-to-most-negative, but we weren't sign-extending this value to the 64-bit output as the pseudocode requires. (2) For left shifts by less than 48, we copied the "8/16 bit" code from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right thing because it assumes the C type we're working with is at least twice the number of bits we're saturating to (so that a shift left by bits-1 can't shift anything off the top of the value). This isn't true for bits == 48, so we would incorrectly return 0 rather than the most-positive value for situations like "shift (1 << 44) right by 20". Instead check for saturation by doing the shift and signextend and then testing whether shifting back left again gives the original value. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 847ef5156a..5730b48f35 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1576,9 +1576,8 @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift, } return src >> -shift; } else if (shift < 48) { - int64_t val = src << shift; - int64_t extval = sextract64(val, 0, 48); - if (!sat || val == extval) { + int64_t extval = sextract64(src << shift, 0, 48); + if (!sat || src == (extval >> shift)) { return extval; } } else if (!sat || src == 0) { @@ -1586,7 +1585,7 @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift, } *sat = 1; - return (1ULL << 47) - (src >= 0); + return src >= 0 ? 
MAKE_64BIT_MASK(0, 47) : MAKE_64BIT_MASK(47, 17); } /* Operate on 64-bit values, but saturate at 48 bits */ @@ -1609,9 +1608,8 @@ static inline uint64_t do_uqrshl48_d(uint64_t src, int64_t shift, return extval; } } else if (shift < 48) { - uint64_t val = src << shift; - uint64_t extval = extract64(val, 0, 48); - if (!sat || val == extval) { + uint64_t extval = extract64(src << shift, 0, 48); + if (!sat || src == (extval >> shift)) { return extval; } } else if (!sat || src == 0) { From fdcf2269c4e0e4f5ca3a389290a71d7aa98bd5c7 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:48 +0100 Subject: [PATCH 07/44] target/arm: Fix MVE 48-bit SQRSHRL for small right shifts We got an edge case wrong in the 48-bit SQRSHRL implementation: if the shift is to the right, although it always makes the result smaller than the input value it might not be within the 48-bit range the result is supposed to be if the input had some bits in [63..48] set and the shift didn't bring all of those within the [47..0] range. Handle this similarly to the way we already do for this case in do_uqrshl48_d(): extend the calculated result from 48 bits, and return that if not saturating or if it doesn't change the result; otherwise fall through to return a saturated value. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 5730b48f35..1a4b2ef807 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1563,6 +1563,8 @@ uint64_t HELPER(mve_uqrshll)(CPUARMState *env, uint64_t n, uint32_t shift) static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift, bool round, uint32_t *sat) { + int64_t val, extval; + if (shift <= -48) { /* Rounding the sign bit always produces 0. */ if (round) { @@ -1572,9 +1574,14 @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift, } else if (shift < 0) { if (round) { src >>= -shift - 1; - return (src >> 1) + (src & 1); + val = (src >> 1) + (src & 1); + } else { + val = src >> -shift; + } + extval = sextract64(val, 0, 48); + if (!sat || val == extval) { + return extval; } - return src >> -shift; } else if (shift < 48) { int64_t extval = sextract64(src << shift, 0, 48); if (!sat || src == (extval >> shift)) { From 3f4f1880c245453b75d4c09049845b19de9964bf Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:48 +0100 Subject: [PATCH 08/44] target/arm: Fix calculation of LTP mask when LR is 0 In mve_element_mask(), we calculate a mask for tail predication which should have a number of 1 bits based on the value of LR. However, our MAKE_64BIT_MASK() macro has undefined behaviour when passed a zero length. Special case this to give the all-zeroes mask we require. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 1a4b2ef807..bc67b86e70 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -64,7 +64,8 @@ static uint16_t mve_element_mask(CPUARMState *env) */ int masklen = env->regs[14] << env->v7m.ltpsize; assert(masklen <= 16); - mask &= MAKE_64BIT_MASK(0, masklen); + uint16_t ltpmask = masklen ? 
MAKE_64BIT_MASK(0, masklen) : 0; + mask &= ltpmask; } if ((env->condexec_bits & 0xf) == 0) { From e0d40070e1b3b6cf16ad2a51e85fb92261363d2a Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:49 +0100 Subject: [PATCH 09/44] target/arm: Factor out mve_eci_mask() In some situations we need a mask telling us which parts of the vector correspond to beats that are not being executed because of ECI, separately from the combined "which bytes are predicated away" mask. Factor this mask calculation out of mve_element_mask() into its own function. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 58 ++++++++++++++++++++++++----------------- 1 file changed, 34 insertions(+), 24 deletions(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index bc67b86e70..ffff280726 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -26,6 +26,35 @@ #include "exec/exec-all.h" #include "tcg/tcg.h" +static uint16_t mve_eci_mask(CPUARMState *env) +{ + /* + * Return the mask of which elements in the MVE vector correspond + * to beats being executed. The mask has 1 bits for executed lanes + * and 0 bits where ECI says this beat was already executed. + */ + int eci; + + if ((env->condexec_bits & 0xf) != 0) { + return 0xffff; + } + + eci = env->condexec_bits >> 4; + switch (eci) { + case ECI_NONE: + return 0xffff; + case ECI_A0: + return 0xfff0; + case ECI_A0A1: + return 0xff00; + case ECI_A0A1A2: + case ECI_A0A1A2B0: + return 0xf000; + default: + g_assert_not_reached(); + } +} + static uint16_t mve_element_mask(CPUARMState *env) { /* @@ -68,30 +97,11 @@ static uint16_t mve_element_mask(CPUARMState *env) mask &= ltpmask; } - if ((env->condexec_bits & 0xf) == 0) { - /* - * ECI bits indicate which beats are already executed; - * we handle this by effectively predicating them out. - */ - int eci = env->condexec_bits >> 4; - switch (eci) { - case ECI_NONE: - break; - case ECI_A0: - mask &= 0xfff0; - break; - case ECI_A0A1: - mask &= 0xff00; - break; - case ECI_A0A1A2: - case ECI_A0A1A2B0: - mask &= 0xf000; - break; - default: - g_assert_not_reached(); - } - } - + /* + * ECI bits indicate which beats are already executed; + * we handle this by effectively predicating them out. + */ + mask &= mve_eci_mask(env); return mask; } From e3152d02da21ac6e2169b1bf104a2d0478664a4a Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:49 +0100 Subject: [PATCH 10/44] target/arm: Fix VPT advance when ECI is non-zero We were not paying attention to the ECI state when advancing the VPT state. Architecturally, VPT state advance happens for every beat (see the pseudocode VPTAdvance()), so on every beat the 4 bits of VPR.P0 corresponding to the current beat are inverted if required, and at the end of beats 1 and 3 the VPR MASK fields are updated. This means that if the ECI state says we should not be executing all 4 beats then we need to skip some of the updating of the VPR that we currently do in mve_advance_vpt(). 
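
Purely for illustration (not part of the patch; the helper name is invented), the inversion-mask idea in isolation, using the same ECI byte masks as mve_eci_mask():

#include <stdint.h>

/*
 * eci_mask has one bit per byte lane of the beats still to be executed
 * (0xffff for ECI_NONE, 0xfff0 after ECI_A0, and so on).  MASK01/MASK23
 * values of 8 or below mean "do not invert" that half of P0.
 */
static uint16_t vpt_p0_invert_mask(uint16_t eci_mask,
                                   unsigned mask01, unsigned mask23)
{
    uint16_t inv = eci_mask;   /* assume every executed lane is inverted */
    if (mask01 <= 8) {
        inv &= ~0x00ff;        /* beats 0 and 1: leave low half of P0 alone */
    }
    if (mask23 <= 8) {
        inv &= ~0xff00;        /* beats 2 and 3: leave high half of P0 alone */
    }
    return inv;                /* XOR this into VPR.P0 */
}

So with ECI_A0 (eci_mask 0xfff0) and MASK01 requesting inversion, bits [3:0] of P0 -- beat 0, which already ran -- are still left untouched.
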
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index ffff280726..bc89ce94d5 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -110,6 +110,8 @@ static void mve_advance_vpt(CPUARMState *env) /* Advance the VPT and ECI state if necessary */ uint32_t vpr = env->v7m.vpr; unsigned mask01, mask23; + uint16_t inv_mask; + uint16_t eci_mask = mve_eci_mask(env); if ((env->condexec_bits & 0xf) == 0) { env->condexec_bits = (env->condexec_bits == (ECI_A0A1A2B0 << 4)) ? @@ -121,17 +123,25 @@ static void mve_advance_vpt(CPUARMState *env) return; } + /* Invert P0 bits if needed, but only for beats we actually executed */ mask01 = FIELD_EX32(vpr, V7M_VPR, MASK01); mask23 = FIELD_EX32(vpr, V7M_VPR, MASK23); - if (mask01 > 8) { - /* high bit set, but not 0b1000: invert the relevant half of P0 */ - vpr ^= 0xff; + /* Start by assuming we invert all bits corresponding to executed beats */ + inv_mask = eci_mask; + if (mask01 <= 8) { + /* MASK01 says don't invert low half of P0 */ + inv_mask &= ~0xff; } - if (mask23 > 8) { - /* high bit set, but not 0b1000: invert the relevant half of P0 */ - vpr ^= 0xff00; + if (mask23 <= 8) { + /* MASK23 says don't invert high half of P0 */ + inv_mask &= ~0xff00; } - vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1); + vpr ^= inv_mask; + /* Only update MASK01 if beat 1 executed */ + if (eci_mask & 0xf0) { + vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1); + } + /* Beat 3 always executes, so update MASK23 */ vpr = FIELD_DP32(vpr, V7M_VPR, MASK23, mask23 << 1); env->v7m.vpr = vpr; } From 41704cc262d6f451470c2074560bc7309064865d Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:49 +0100 Subject: [PATCH 11/44] target/arm: Fix VLDRB/H/W for predicated elements For vector loads, predicated elements are zeroed, instead of retaining their previous values (as happens for most data processing operations). This means we need to distinguish "beat not executed due to ECI" (don't touch destination element) from "beat executed but predicated out" (zero destination element). Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve_helper.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index bc89ce94d5..be8b954531 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -146,12 +146,13 @@ static void mve_advance_vpt(CPUARMState *env) env->v7m.vpr = vpr; } - +/* For loads, predicated lanes are zeroed instead of keeping their old values */ #define DO_VLDR(OP, MSIZE, LDTYPE, ESIZE, TYPE) \ void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \ { \ TYPE *d = vd; \ uint16_t mask = mve_element_mask(env); \ + uint16_t eci_mask = mve_eci_mask(env); \ unsigned b, e; \ /* \ * R_SXTM allows the dest reg to become UNKNOWN for abandoned \ @@ -159,8 +160,9 @@ static void mve_advance_vpt(CPUARMState *env) * then take an exception. \ */ \ for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \ - if (mask & (1 << b)) { \ - d[H##ESIZE(e)] = cpu_##LDTYPE##_data_ra(env, addr, GETPC()); \ + if (eci_mask & (1 << b)) { \ + d[H##ESIZE(e)] = (mask & (1 << b)) ? 
\ + cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \ } \ addr += MSIZE; \ } \ From c1bd78cb06afb37e4043d2b0db000abfecab5fe4 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:50 +0100 Subject: [PATCH 12/44] target/arm: Implement MVE VMULL (polynomial) Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes in two flavours: 8x8->16 and a 16x16->32. Also unlike Neon, the inputs are in either the low or the high half of each double-width element. The assembler for this insn indicates the size with "P8" or "P16", encoded into bit 28 as size = 0 or 1. We choose to follow the same encoding as VQDMULL and decode this into a->size as MO_16 or MO_32 indicating the size of the result elements. This then carries through to the helper function names where it then matches up with the existing pmull_h() which does an 8x8->16 operation and a new pmull_w() which does the 16x16->32. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 5 +++++ target/arm/mve.decode | 14 ++++++++++---- target/arm/mve_helper.c | 16 ++++++++++++++++ target/arm/translate-mve.c | 28 ++++++++++++++++++++++++++++ target/arm/vec_helper.c | 14 +++++++++++++- target/arm/vec_internal.h | 11 +++++++++++ 6 files changed, 83 insertions(+), 5 deletions(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 56e40844ad..84adfb2151 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -145,6 +145,11 @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) +DEF_HELPER_FLAGS_4(mve_vmullpbh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) +DEF_HELPER_FLAGS_4(mve_vmullpth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) +DEF_HELPER_FLAGS_4(mve_vmullpbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) +DEF_HELPER_FLAGS_4(mve_vmullptw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) + DEF_HELPER_FLAGS_4(mve_vqdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vqdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vqdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index fa9d921f93..de079ec517 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -173,10 +173,16 @@ VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op -VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op -VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op -VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op -VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op +{ + VMULLP_B 111 . 1110 0 . 11 ... 1 ... 0 1110 . 0 . 0 ... 0 @2op_sz28 + VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op + VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op +} +{ + VMULLP_T 111 . 1110 0 . 11 ... 1 ... 1 1110 . 0 . 0 ... 0 @2op_sz28 + VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op + VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op +} VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 
0 @2op diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index be8b954531..91fb346d7e 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -481,6 +481,22 @@ DO_2OP_L(vmulltub, 1, 1, uint8_t, 2, uint16_t, DO_MUL) DO_2OP_L(vmulltuh, 1, 2, uint16_t, 4, uint32_t, DO_MUL) DO_2OP_L(vmulltuw, 1, 4, uint32_t, 8, uint64_t, DO_MUL) +/* + * Polynomial multiply. We can always do this generating 64 bits + * of the result at a time, so we don't need to use DO_2OP_L. + */ +#define VMULLPH_MASK 0x00ff00ff00ff00ffULL +#define VMULLPW_MASK 0x0000ffff0000ffffULL +#define DO_VMULLPBH(N, M) pmull_h((N) & VMULLPH_MASK, (M) & VMULLPH_MASK) +#define DO_VMULLPTH(N, M) DO_VMULLPBH((N) >> 8, (M) >> 8) +#define DO_VMULLPBW(N, M) pmull_w((N) & VMULLPW_MASK, (M) & VMULLPW_MASK) +#define DO_VMULLPTW(N, M) DO_VMULLPBW((N) >> 16, (M) >> 16) + +DO_2OP(vmullpbh, 8, uint64_t, DO_VMULLPBH) +DO_2OP(vmullpth, 8, uint64_t, DO_VMULLPTH) +DO_2OP(vmullpbw, 8, uint64_t, DO_VMULLPBW) +DO_2OP(vmullptw, 8, uint64_t, DO_VMULLPTW) + /* * Because the computation type is at least twice as large as required, * these work for both signed and unsigned source types. diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index a2a45036a0..d318f34b2b 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -464,6 +464,34 @@ static bool trans_VQDMULLT(DisasContext *s, arg_2op *a) return do_2op(s, a, fns[a->size]); } +static bool trans_VMULLP_B(DisasContext *s, arg_2op *a) +{ + /* + * Note that a->size indicates the output size, ie VMULL.P8 + * is the 8x8->16 operation and a->size is MO_16; VMULL.P16 + * is the 16x16->32 operation and a->size is MO_32. + */ + static MVEGenTwoOpFn * const fns[] = { + NULL, + gen_helper_mve_vmullpbh, + gen_helper_mve_vmullpbw, + NULL, + }; + return do_2op(s, a, fns[a->size]); +} + +static bool trans_VMULLP_T(DisasContext *s, arg_2op *a) +{ + /* a->size is as for trans_VMULLP_B */ + static MVEGenTwoOpFn * const fns[] = { + NULL, + gen_helper_mve_vmullpth, + gen_helper_mve_vmullptw, + NULL, + }; + return do_2op(s, a, fns[a->size]); +} + /* * VADC and VSBC: these perform an add-with-carry or subtract-with-carry * of the 32-bit elements in each lane of the input vectors, where the diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c index 034f6b84f7..17fb158362 100644 --- a/target/arm/vec_helper.c +++ b/target/arm/vec_helper.c @@ -2028,11 +2028,23 @@ static uint64_t expand_byte_to_half(uint64_t x) | ((x & 0xff000000) << 24); } -static uint64_t pmull_h(uint64_t op1, uint64_t op2) +uint64_t pmull_w(uint64_t op1, uint64_t op2) { uint64_t result = 0; int i; + for (i = 0; i < 16; ++i) { + uint64_t mask = (op1 & 0x0000000100000001ull) * 0xffffffff; + result ^= op2 & mask; + op1 >>= 1; + op2 <<= 1; + } + return result; +} +uint64_t pmull_h(uint64_t op1, uint64_t op2) +{ + uint64_t result = 0; + int i; for (i = 0; i < 8; ++i) { uint64_t mask = (op1 & 0x0001000100010001ull) * 0xffff; result ^= op2 & mask; diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h index 865d213944..2a33558290 100644 --- a/target/arm/vec_internal.h +++ b/target/arm/vec_internal.h @@ -206,4 +206,15 @@ int16_t do_sqrdmlah_h(int16_t, int16_t, int16_t, bool, bool, uint32_t *); int32_t do_sqrdmlah_s(int32_t, int32_t, int32_t, bool, bool, uint32_t *); int64_t do_sqrdmlah_d(int64_t, int64_t, int64_t, bool, bool); +/* + * 8 x 8 -> 16 vector polynomial multiply where the inputs are + * in the low 8 bits of each 16-bit element +*/ +uint64_t pmull_h(uint64_t op1, uint64_t op2); 
+/* + * 16 x 16 -> 32 vector polynomial multiply where the inputs are + * in the low 16 bits of each 32-bit element + */ +uint64_t pmull_w(uint64_t op1, uint64_t op2); + #endif /* TARGET_ARM_VEC_INTERNALS_H */ From 395b92d50ee2b62b662d5524a61c532a2752336c Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:50 +0100 Subject: [PATCH 13/44] target/arm: Implement MVE incrementing/decrementing dup insns Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP, VIWDUP and VDWDUP. These fill the elements of a vector with successively incrementing values, starting at the offset specified in a general purpose register. The final value of the offset is written back to this register. The wrapping variants take a second general purpose register which specifies the point where the count should wrap back to 0. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 12 ++++ target/arm/mve.decode | 25 ++++++++ target/arm/mve_helper.c | 63 +++++++++++++++++++ target/arm/translate-mve.c | 120 +++++++++++++++++++++++++++++++++++++ 4 files changed, 220 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 84adfb2151..b9af03cc03 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -35,6 +35,18 @@ DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32) DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32) +DEF_HELPER_FLAGS_4(mve_viduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32) +DEF_HELPER_FLAGS_4(mve_vidupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32) + +DEF_HELPER_FLAGS_5(mve_viwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32) +DEF_HELPER_FLAGS_5(mve_viwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32) +DEF_HELPER_FLAGS_5(mve_viwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32) + +DEF_HELPER_FLAGS_5(mve_vdwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32) +DEF_HELPER_FLAGS_5(mve_vdwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32) +DEF_HELPER_FLAGS_5(mve_vdwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32) + DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index de079ec517..88c9c18ebf 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -35,6 +35,8 @@ &2scalar qd qn rm size &1imm qd imm cmode op &2shift qd qm shift size +&vidup qd rn size imm +&viwdup qd rn rm size imm @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0 # Note that both Rn and Qd are 3 bits only (no D bit) @@ -259,6 +261,29 @@ VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2 +# Incrementing and decrementing dup + +# VIDUP, VDDUP format immediate: 1 << (immh:imml) +%imm_vidup 7:1 0:1 !function=vidup_imm + +# VIDUP, VDDUP registers: Rm bits [3:1] from insn, bit 0 is 1; +# Rn bits [3:1] from insn, bit 0 is 0 +%vidup_rm 1:3 !function=times_2_plus_1 +%vidup_rn 17:3 !function=times_2 + +@vidup .... .... . . size:2 .... .... .... .... .... \ + qd=%qd imm=%imm_vidup rn=%vidup_rn &vidup +@viwdup .... .... . . size:2 .... .... .... .... .... \ + qd=%qd imm=%imm_vidup rm=%vidup_rm rn=%vidup_rn &viwdup +{ + VIDUP 1110 1110 0 . .. ... 1 ... 0 1111 . 
110 111 . @vidup + VIWDUP 1110 1110 0 . .. ... 1 ... 0 1111 . 110 ... . @viwdup +} +{ + VDDUP 1110 1110 0 . .. ... 1 ... 1 1111 . 110 111 . @vidup + VDWDUP 1110 1110 0 . .. ... 1 ... 1 1111 . 110 ... . @viwdup +} + # multiply-add long dual accumulate # rdahi: bits [3:1] from insn, bit 0 is 1 # rdalo: bits [3:1] from insn, bit 0 is 0 diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 91fb346d7e..38b4181db2 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1695,3 +1695,66 @@ uint32_t HELPER(mve_sqrshr)(CPUARMState *env, uint32_t n, uint32_t shift) { return do_sqrshl_bhs(n, -(int8_t)shift, 32, true, &env->QF); } + +#define DO_VIDUP(OP, ESIZE, TYPE, FN) \ + uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd, \ + uint32_t offset, uint32_t imm) \ + { \ + TYPE *d = vd; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + mergemask(&d[H##ESIZE(e)], offset, mask); \ + offset = FN(offset, imm); \ + } \ + mve_advance_vpt(env); \ + return offset; \ + } + +#define DO_VIWDUP(OP, ESIZE, TYPE, FN) \ + uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd, \ + uint32_t offset, uint32_t wrap, \ + uint32_t imm) \ + { \ + TYPE *d = vd; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + mergemask(&d[H##ESIZE(e)], offset, mask); \ + offset = FN(offset, wrap, imm); \ + } \ + mve_advance_vpt(env); \ + return offset; \ + } + +#define DO_VIDUP_ALL(OP, FN) \ + DO_VIDUP(OP##b, 1, int8_t, FN) \ + DO_VIDUP(OP##h, 2, int16_t, FN) \ + DO_VIDUP(OP##w, 4, int32_t, FN) + +#define DO_VIWDUP_ALL(OP, FN) \ + DO_VIWDUP(OP##b, 1, int8_t, FN) \ + DO_VIWDUP(OP##h, 2, int16_t, FN) \ + DO_VIWDUP(OP##w, 4, int32_t, FN) + +static uint32_t do_add_wrap(uint32_t offset, uint32_t wrap, uint32_t imm) +{ + offset += imm; + if (offset == wrap) { + offset = 0; + } + return offset; +} + +static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm) +{ + if (offset == 0) { + offset = wrap; + } + offset -= imm; + return offset; +} + +DO_VIDUP_ALL(vidup, DO_ADD) +DO_VIWDUP_ALL(viwdup, do_add_wrap) +DO_VIWDUP_ALL(vdwdup, do_sub_wrap) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index d318f34b2b..a220521c00 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -25,6 +25,11 @@ #include "translate.h" #include "translate-a32.h" +static inline int vidup_imm(DisasContext *s, int x) +{ + return 1 << x; +} + /* Include the generated decoder */ #include "decode-mve.c.inc" @@ -36,6 +41,8 @@ typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64); typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64); +typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32); +typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32); /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */ static inline long mve_qreg_offset(unsigned reg) @@ -1059,3 +1066,116 @@ static bool trans_VSHLC(DisasContext *s, arg_VSHLC *a) mve_update_eci(s); return true; } + +static bool do_vidup(DisasContext *s, arg_vidup *a, MVEGenVIDUPFn *fn) +{ + TCGv_ptr qd; + TCGv_i32 rn; + + /* + * Vector increment/decrement with wrap and duplicate (VIDUP, VDDUP). 
+ * This fills the vector with elements of successively increasing + * or decreasing values, starting from Rn. + */ + if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) { + return false; + } + if (a->size == MO_64) { + /* size 0b11 is another encoding */ + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qd = mve_qreg_ptr(a->qd); + rn = load_reg(s, a->rn); + fn(rn, cpu_env, qd, rn, tcg_constant_i32(a->imm)); + store_reg(s, a->rn, rn); + tcg_temp_free_ptr(qd); + mve_update_eci(s); + return true; +} + +static bool do_viwdup(DisasContext *s, arg_viwdup *a, MVEGenVIWDUPFn *fn) +{ + TCGv_ptr qd; + TCGv_i32 rn, rm; + + /* + * Vector increment/decrement with wrap and duplicate (VIWDUp, VDWDUP) + * This fills the vector with elements of successively increasing + * or decreasing values, starting from Rn. Rm specifies a point where + * the count wraps back around to 0. The updated offset is written back + * to Rn. + */ + if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) { + return false; + } + if (!fn || a->rm == 13 || a->rm == 15) { + /* + * size 0b11 is another encoding; Rm == 13 is UNPREDICTABLE; + * Rm == 13 is VIWDUP, VDWDUP. + */ + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qd = mve_qreg_ptr(a->qd); + rn = load_reg(s, a->rn); + rm = load_reg(s, a->rm); + fn(rn, cpu_env, qd, rn, rm, tcg_constant_i32(a->imm)); + store_reg(s, a->rn, rn); + tcg_temp_free_ptr(qd); + tcg_temp_free_i32(rm); + mve_update_eci(s); + return true; +} + +static bool trans_VIDUP(DisasContext *s, arg_vidup *a) +{ + static MVEGenVIDUPFn * const fns[] = { + gen_helper_mve_vidupb, + gen_helper_mve_viduph, + gen_helper_mve_vidupw, + NULL, + }; + return do_vidup(s, a, fns[a->size]); +} + +static bool trans_VDDUP(DisasContext *s, arg_vidup *a) +{ + static MVEGenVIDUPFn * const fns[] = { + gen_helper_mve_vidupb, + gen_helper_mve_viduph, + gen_helper_mve_vidupw, + NULL, + }; + /* VDDUP is just like VIDUP but with a negative immediate */ + a->imm = -a->imm; + return do_vidup(s, a, fns[a->size]); +} + +static bool trans_VIWDUP(DisasContext *s, arg_viwdup *a) +{ + static MVEGenVIWDUPFn * const fns[] = { + gen_helper_mve_viwdupb, + gen_helper_mve_viwduph, + gen_helper_mve_viwdupw, + NULL, + }; + return do_viwdup(s, a, fns[a->size]); +} + +static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a) +{ + static MVEGenVIWDUPFn * const fns[] = { + gen_helper_mve_vdwdupb, + gen_helper_mve_vdwduph, + gen_helper_mve_vdwdupw, + NULL, + }; + return do_viwdup(s, a, fns[a->size]); +} From 552517861c6553941e8bfbbafbf97b6a6d992636 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:50 +0100 Subject: [PATCH 14/44] target/arm: Factor out gen_vpst() Factor out the "generate code to update VPR.MASK01/MASK23" part of trans_VPST(); we are going to want to reuse it for the VPT insns. 
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/translate-mve.c | 31 +++++++++++++++++-------------- 1 file changed, 17 insertions(+), 14 deletions(-) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index a220521c00..6d8da36146 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -737,33 +737,24 @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a) return do_long_dual_acc(s, a, fns[a->x]); } -static bool trans_VPST(DisasContext *s, arg_VPST *a) +static void gen_vpst(DisasContext *s, uint32_t mask) { - TCGv_i32 vpr; - - /* mask == 0 is a "related encoding" */ - if (!dc_isar_feature(aa32_mve, s) || !a->mask) { - return false; - } - if (!mve_eci_check(s) || !vfp_access_check(s)) { - return true; - } /* * Set the VPR mask fields. We take advantage of MASK01 and MASK23 * being adjacent fields in the register. * - * This insn is not predicated, but it is subject to beat-wise + * Updating the masks is not predicated, but it is subject to beat-wise * execution, and the mask is updated on the odd-numbered beats. * So if PSR.ECI says we should skip beat 1, we mustn't update the * 01 mask field. */ - vpr = load_cpu_field(v7m.vpr); + TCGv_i32 vpr = load_cpu_field(v7m.vpr); switch (s->eci) { case ECI_NONE: case ECI_A0: /* Update both 01 and 23 fields */ tcg_gen_deposit_i32(vpr, vpr, - tcg_constant_i32(a->mask | (a->mask << 4)), + tcg_constant_i32(mask | (mask << 4)), R_V7M_VPR_MASK01_SHIFT, R_V7M_VPR_MASK01_LENGTH + R_V7M_VPR_MASK23_LENGTH); break; @@ -772,13 +763,25 @@ static bool trans_VPST(DisasContext *s, arg_VPST *a) case ECI_A0A1A2B0: /* Update only the 23 mask field */ tcg_gen_deposit_i32(vpr, vpr, - tcg_constant_i32(a->mask), + tcg_constant_i32(mask), R_V7M_VPR_MASK23_SHIFT, R_V7M_VPR_MASK23_LENGTH); break; default: g_assert_not_reached(); } store_cpu_field(vpr, v7m.vpr); +} + +static bool trans_VPST(DisasContext *s, arg_VPST *a) +{ + /* mask == 0 is a "related encoding" */ + if (!dc_isar_feature(aa32_mve, s) || !a->mask) { + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + gen_vpst(s, a->mask); mve_update_and_store_eci(s); return true; } From eff5d9a9bdbabfb1ccdb62c1c61311a575b11e9c Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:51 +0100 Subject: [PATCH 15/44] target/arm: Implement MVE integer vector comparisons Implement the MVE integer vector comparison instructions. These are "VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings T1, T2 and T3. These insns compare corresponding elements in each vector, and update the VPR.P0 predicate bits with the results of the comparison. VPT also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively "VCMP then VPST". 
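
As an illustration only (not part of the patch; the function name is invented and only the word-size EQ case is shown), this is how one element's compare result fans out into the per-byte P0 bits described above:

#include <stdbool.h>
#include <stdint.h>

static uint16_t vcmpeq_word_pred(const uint32_t *n, const uint32_t *m)
{
    uint16_t pred = 0;
    uint16_t emask = 0xf;          /* 4 predicate bits per 32-bit element */
    unsigned e;

    for (e = 0; e < 4; e++) {
        bool r = (n[e] == m[e]);
        pred |= r ? emask : 0;     /* true fans out to all bytes of the lane */
        emask <<= 4;
    }
    /*
     * The real helper then ANDs this with the element mask and merges it
     * into VPR.P0 under the ECI mask, leaving non-executed beats unchanged.
     */
    return pred;
}
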
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 32 ++++++++++++++++++++++ target/arm/mve.decode | 18 +++++++++++- target/arm/mve_helper.c | 56 ++++++++++++++++++++++++++++++++++++++ target/arm/translate-mve.c | 47 ++++++++++++++++++++++++++++++++ 4 files changed, 152 insertions(+), 1 deletion(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index b9af03cc03..ca5a6ab51c 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -480,3 +480,35 @@ DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32) DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32) DEF_HELPER_FLAGS_3(mve_uqrshl, TCG_CALL_NO_RWG, i32, env, i32, i32) DEF_HELPER_FLAGS_3(mve_sqrshr, TCG_CALL_NO_RWG, i32, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vcmpeqb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpeqh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpeqw, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmpneb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpneh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpnew, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmpcsb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpcsh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpcsw, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmphib, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmphih, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmphiw, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmpgeb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpgeh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpgew, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmpltb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmplth, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpltw, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmpgtb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpgth, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 88c9c18ebf..76bbf9a613 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -37,6 +37,7 @@ &2shift qd qm shift size &vidup qd rn size imm &viwdup qd rn rm size imm +&vcmp qm qn size mask @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0 # Note that both Rn and Qd are 3 bits only (no D bit) @@ -86,6 +87,10 @@ @2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \ size=2 shift=%rshift_i5 +# Vector comparison; 4-bit Qm but 3-bit Qn +%mask_22_13 22:1 13:3 +@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13 + # Vector loads and stores # Widening loads and narrowing stores: @@ -345,7 +350,6 @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar } # Predicate operations -%mask_22_13 22:1 13:3 VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13 # Logical immediate operations (1 reg and modified-immediate) @@ -458,3 +462,15 @@ VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 
0 ... 0 @2_shr_b VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd + +# Comparisons. We expand out the conditions which are split across +# encodings T1, T2, T3 and the fc bits. These include VPT, which is +# effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero. +VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp +VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp +VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp +VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp +VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp +VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp +VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp +VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 38b4181db2..b0b380b94b 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1758,3 +1758,59 @@ static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm) DO_VIDUP_ALL(vidup, DO_ADD) DO_VIWDUP_ALL(viwdup, do_add_wrap) DO_VIWDUP_ALL(vdwdup, do_sub_wrap) + +/* + * Vector comparison. + * P0 bits for non-executed beats (where eci_mask is 0) are unchanged. + * P0 bits for predicated lanes in executed beats (where mask is 0) are 0. + * P0 bits otherwise are updated with the results of the comparisons. + * We must also keep unchanged the MASK fields at the top of v7m.vpr. + */ +#define DO_VCMP(OP, ESIZE, TYPE, FN) \ + void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, void *vm) \ + { \ + TYPE *n = vn, *m = vm; \ + uint16_t mask = mve_element_mask(env); \ + uint16_t eci_mask = mve_eci_mask(env); \ + uint16_t beatpred = 0; \ + uint16_t emask = MAKE_64BIT_MASK(0, ESIZE); \ + unsigned e; \ + for (e = 0; e < 16 / ESIZE; e++) { \ + bool r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)]); \ + /* Comparison sets 0/1 bits for each byte in the element */ \ + beatpred |= r * emask; \ + emask <<= ESIZE; \ + } \ + beatpred &= mask; \ + env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | \ + (beatpred & eci_mask); \ + mve_advance_vpt(env); \ + } + +#define DO_VCMP_S(OP, FN) \ + DO_VCMP(OP##b, 1, int8_t, FN) \ + DO_VCMP(OP##h, 2, int16_t, FN) \ + DO_VCMP(OP##w, 4, int32_t, FN) + +#define DO_VCMP_U(OP, FN) \ + DO_VCMP(OP##b, 1, uint8_t, FN) \ + DO_VCMP(OP##h, 2, uint16_t, FN) \ + DO_VCMP(OP##w, 4, uint32_t, FN) + +#define DO_EQ(N, M) ((N) == (M)) +#define DO_NE(N, M) ((N) != (M)) +#define DO_EQ(N, M) ((N) == (M)) +#define DO_EQ(N, M) ((N) == (M)) +#define DO_GE(N, M) ((N) >= (M)) +#define DO_LT(N, M) ((N) < (M)) +#define DO_GT(N, M) ((N) > (M)) +#define DO_LE(N, M) ((N) <= (M)) + +DO_VCMP_U(vcmpeq, DO_EQ) +DO_VCMP_U(vcmpne, DO_NE) +DO_VCMP_U(vcmpcs, DO_GE) +DO_VCMP_U(vcmphi, DO_GT) +DO_VCMP_S(vcmpge, DO_GE) +DO_VCMP_S(vcmplt, DO_LT) +DO_VCMP_S(vcmpgt, DO_GT) +DO_VCMP_S(vcmple, DO_LE) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 6d8da36146..2d7211b527 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -43,6 +43,7 @@ typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64); typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32); typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32); +typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr); /* Return the offset of a Qn register (same semantics as 
aa32_vfp_qreg()) */ static inline long mve_qreg_offset(unsigned reg) @@ -1182,3 +1183,49 @@ static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a) }; return do_viwdup(s, a, fns[a->size]); } + +static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn) +{ + TCGv_ptr qn, qm; + + if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) || + !fn) { + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qn = mve_qreg_ptr(a->qn); + qm = mve_qreg_ptr(a->qm); + fn(cpu_env, qn, qm); + tcg_temp_free_ptr(qn); + tcg_temp_free_ptr(qm); + if (a->mask) { + /* VPT */ + gen_vpst(s, a->mask); + } + mve_update_eci(s); + return true; +} + +#define DO_VCMP(INSN, FN) \ + static bool trans_##INSN(DisasContext *s, arg_vcmp *a) \ + { \ + static MVEGenCmpFn * const fns[] = { \ + gen_helper_mve_##FN##b, \ + gen_helper_mve_##FN##h, \ + gen_helper_mve_##FN##w, \ + NULL, \ + }; \ + return do_vcmp(s, a, fns[a->size]); \ + } + +DO_VCMP(VCMPEQ, vcmpeq) +DO_VCMP(VCMPNE, vcmpne) +DO_VCMP(VCMPCS, vcmpcs) +DO_VCMP(VCMPHI, vcmphi) +DO_VCMP(VCMPGE, vcmpge) +DO_VCMP(VCMPLT, vcmplt) +DO_VCMP(VCMPGT, vcmpgt) +DO_VCMP(VCMPLE, vcmple) From cce81873bcc163a86488deec1e122c303c6762a4 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:51 +0100 Subject: [PATCH 16/44] target/arm: Implement MVE integer vector-vs-scalar comparisons Implement the MVE integer vector comparison instructions that compare each element against a scalar from a general purpose register. These are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)" encodings T4, T5 and T6. We have to move the decodetree pattern for VPST, because it overlaps with VCMP T4 with size = 0b11. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 32 +++++++++++++++++++++++++++ target/arm/mve.decode | 18 +++++++++++++--- target/arm/mve_helper.c | 44 +++++++++++++++++++++++++++++++------- target/arm/translate-mve.c | 43 +++++++++++++++++++++++++++++++++++++ 4 files changed, 126 insertions(+), 11 deletions(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index ca5a6ab51c..4f9903e66e 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -512,3 +512,35 @@ DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vcmpne_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpne_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpne_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vcmphi_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmphi_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmphi_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vcmpge_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) 
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpge_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vcmplt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmplt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmplt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vcmple_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmple_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vcmple_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 76bbf9a613..ef708ba80f 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -38,6 +38,7 @@ &vidup qd rn size imm &viwdup qd rn rm size imm &vcmp qm qn size mask +&vcmp_scalar qn rm size mask @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0 # Note that both Rn and Qd are 3 bits only (no D bit) @@ -90,6 +91,8 @@ # Vector comparison; 4-bit Qm but 3-bit Qn %mask_22_13 22:1 13:3 @vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13 +@vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \ + mask=%mask_22_13 # Vector loads and stores @@ -349,9 +352,6 @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar rdahi=%rdahi rdalo=%rdalo } -# Predicate operations -VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13 - # Logical immediate operations (1 reg and modified-immediate) # The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but @@ -474,3 +474,15 @@ VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp + +{ + VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13 + VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar +} +VCMPNE_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 0 0 .... @vcmp_scalar +VCMPCS_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 1 0 .... @vcmp_scalar +VCMPHI_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 1 0 .... @vcmp_scalar +VCMPGE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 0 0 .... @vcmp_scalar +VCMPLT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 0 0 .... @vcmp_scalar +VCMPGT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 1 0 .... @vcmp_scalar +VCMPLE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 1 0 .... 
@vcmp_scalar diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index b0b380b94b..1a021a9a81 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1787,15 +1787,43 @@ DO_VIWDUP_ALL(vdwdup, do_sub_wrap) mve_advance_vpt(env); \ } -#define DO_VCMP_S(OP, FN) \ - DO_VCMP(OP##b, 1, int8_t, FN) \ - DO_VCMP(OP##h, 2, int16_t, FN) \ - DO_VCMP(OP##w, 4, int32_t, FN) +#define DO_VCMP_SCALAR(OP, ESIZE, TYPE, FN) \ + void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \ + uint32_t rm) \ + { \ + TYPE *n = vn; \ + uint16_t mask = mve_element_mask(env); \ + uint16_t eci_mask = mve_eci_mask(env); \ + uint16_t beatpred = 0; \ + uint16_t emask = MAKE_64BIT_MASK(0, ESIZE); \ + unsigned e; \ + for (e = 0; e < 16 / ESIZE; e++) { \ + bool r = FN(n[H##ESIZE(e)], (TYPE)rm); \ + /* Comparison sets 0/1 bits for each byte in the element */ \ + beatpred |= r * emask; \ + emask <<= ESIZE; \ + } \ + beatpred &= mask; \ + env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | \ + (beatpred & eci_mask); \ + mve_advance_vpt(env); \ + } -#define DO_VCMP_U(OP, FN) \ - DO_VCMP(OP##b, 1, uint8_t, FN) \ - DO_VCMP(OP##h, 2, uint16_t, FN) \ - DO_VCMP(OP##w, 4, uint32_t, FN) +#define DO_VCMP_S(OP, FN) \ + DO_VCMP(OP##b, 1, int8_t, FN) \ + DO_VCMP(OP##h, 2, int16_t, FN) \ + DO_VCMP(OP##w, 4, int32_t, FN) \ + DO_VCMP_SCALAR(OP##_scalarb, 1, int8_t, FN) \ + DO_VCMP_SCALAR(OP##_scalarh, 2, int16_t, FN) \ + DO_VCMP_SCALAR(OP##_scalarw, 4, int32_t, FN) + +#define DO_VCMP_U(OP, FN) \ + DO_VCMP(OP##b, 1, uint8_t, FN) \ + DO_VCMP(OP##h, 2, uint16_t, FN) \ + DO_VCMP(OP##w, 4, uint32_t, FN) \ + DO_VCMP_SCALAR(OP##_scalarb, 1, uint8_t, FN) \ + DO_VCMP_SCALAR(OP##_scalarh, 2, uint16_t, FN) \ + DO_VCMP_SCALAR(OP##_scalarw, 4, uint32_t, FN) #define DO_EQ(N, M) ((N) == (M)) #define DO_NE(N, M) ((N) != (M)) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 2d7211b527..6c6f159aa3 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -44,6 +44,7 @@ typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64); typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32); typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32); typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr); +typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32); /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */ static inline long mve_qreg_offset(unsigned reg) @@ -1209,6 +1210,37 @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn) return true; } +static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a, + MVEGenScalarCmpFn *fn) +{ + TCGv_ptr qn; + TCGv_i32 rm; + + if (!dc_isar_feature(aa32_mve, s) || !fn || a->rm == 13) { + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qn = mve_qreg_ptr(a->qn); + if (a->rm == 15) { + /* Encoding Rm=0b1111 means "constant zero" */ + rm = tcg_constant_i32(0); + } else { + rm = load_reg(s, a->rm); + } + fn(cpu_env, qn, rm); + tcg_temp_free_ptr(qn); + tcg_temp_free_i32(rm); + if (a->mask) { + /* VPT */ + gen_vpst(s, a->mask); + } + mve_update_eci(s); + return true; +} + #define DO_VCMP(INSN, FN) \ static bool trans_##INSN(DisasContext *s, arg_vcmp *a) \ { \ @@ -1219,6 +1251,17 @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn) NULL, \ }; \ return do_vcmp(s, a, fns[a->size]); \ + } \ + static bool trans_##INSN##_scalar(DisasContext *s, \ + arg_vcmp_scalar *a) \ + { \ + static MVEGenScalarCmpFn * const 
fns[] = { \ + gen_helper_mve_##FN##_scalarb, \ + gen_helper_mve_##FN##_scalarh, \ + gen_helper_mve_##FN##_scalarw, \ + NULL, \ + }; \ + return do_vcmp_scalar(s, a, fns[a->size]); \ } DO_VCMP(VCMPEQ, vcmpeq) From c386443b163965e44ae7a7f7858ec7985e97926b Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:51 +0100 Subject: [PATCH 17/44] target/arm: Implement MVE VPSEL Implement the MVE VPSEL insn, which sets each byte of the destination vector Qd to the byte from either Qn or Qm depending on the value of the corresponding bit in VPR.P0. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 2 ++ target/arm/mve.decode | 7 +++++-- target/arm/mve_helper.c | 19 +++++++++++++++++++ target/arm/translate-mve.c | 2 ++ 4 files changed, 28 insertions(+), 2 deletions(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 4f9903e66e..16c4c3b8f6 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -82,6 +82,8 @@ DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) +DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) + DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index ef708ba80f..4bd20a9a31 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -468,8 +468,11 @@ VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd # effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero. VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp -VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp -VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp +{ + VPSEL 1111 1110 0 . 11 ... 1 ... 0 1111 . 0 . 0 ... 1 @2op_nosz + VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp + VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp +} VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 1a021a9a81..03171766b5 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1842,3 +1842,22 @@ DO_VCMP_S(vcmpge, DO_GE) DO_VCMP_S(vcmplt, DO_LT) DO_VCMP_S(vcmpgt, DO_GT) DO_VCMP_S(vcmple, DO_LE) + +void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm) +{ + /* + * Qd[n] = VPR.P0[n] ? Qn[n] : Qm[n] + * but note that whether bytes are written to Qd is still subject + * to (all forms of) predication in the usual way. 
+ */ + uint64_t *d = vd, *n = vn, *m = vm; + uint16_t mask = mve_element_mask(env); + uint16_t p0 = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0); + unsigned e; + for (e = 0; e < 16 / 8; e++, mask >>= 8, p0 >>= 8) { + uint64_t r = m[H8(e)]; + mergemask(&r, n[H8(e)], p0); + mergemask(&d[H8(e)], r, mask); + } + mve_advance_vpt(env); +} diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 6c6f159aa3..aa38218e08 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -376,6 +376,8 @@ DO_LOGIC(VORR, gen_helper_mve_vorr) DO_LOGIC(VORN, gen_helper_mve_vorn) DO_LOGIC(VEOR, gen_helper_mve_veor) +DO_LOGIC(VPSEL, gen_helper_mve_vpsel) + #define DO_2OP(INSN, FN) \ static bool trans_##INSN(DisasContext *s, arg_2op *a) \ { \ From 6b895bf8fb088a04a91714a555d2b6234cf1e98d Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:52 +0100 Subject: [PATCH 18/44] target/arm: Implement MVE VMLAS Implement the MVE VMLAS insn, which multiplies a vector by a vector and adds a scalar. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 4 ++++ target/arm/mve.decode | 3 +++ target/arm/mve_helper.c | 26 ++++++++++++++++++++++++++ target/arm/translate-mve.c | 1 + 4 files changed, 34 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 16c4c3b8f6..715b1bbd01 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -347,6 +347,10 @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i3 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 4bd20a9a31..226b74790b 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -345,6 +345,9 @@ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar +# The U bit (28) is don't-care because it does not affect the result +VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar + # Vector add across vector { VADDV 111 u:1 1110 1111 size:2 01 ... 
0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 03171766b5..ab02a1e60f 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -948,6 +948,22 @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w) mve_advance_vpt(env); \ } +/* "accumulating" version where FN takes d as well as n and m */ +#define DO_2OP_ACC_SCALAR(OP, ESIZE, TYPE, FN) \ + void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \ + uint32_t rm) \ + { \ + TYPE *d = vd, *n = vn; \ + TYPE m = rm; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + mergemask(&d[H##ESIZE(e)], \ + FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m), mask); \ + } \ + mve_advance_vpt(env); \ + } + /* provide unsigned 2-op scalar helpers for all sizes */ #define DO_2OP_SCALAR_U(OP, FN) \ DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \ @@ -958,6 +974,11 @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w) DO_2OP_SCALAR(OP##h, 2, int16_t, FN) \ DO_2OP_SCALAR(OP##w, 4, int32_t, FN) +#define DO_2OP_ACC_SCALAR_U(OP, FN) \ + DO_2OP_ACC_SCALAR(OP##b, 1, uint8_t, FN) \ + DO_2OP_ACC_SCALAR(OP##h, 2, uint16_t, FN) \ + DO_2OP_ACC_SCALAR(OP##w, 4, uint32_t, FN) + DO_2OP_SCALAR_U(vadd_scalar, DO_ADD) DO_2OP_SCALAR_U(vsub_scalar, DO_SUB) DO_2OP_SCALAR_U(vmul_scalar, DO_MUL) @@ -987,6 +1008,11 @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B) DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H) DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W) +/* Vector by vector plus scalar */ +#define DO_VMLAS(D, N, M) ((N) * (D) + (M)) + +DO_2OP_ACC_SCALAR_U(vmlas, DO_VMLAS) + /* * Long saturating scalar ops. As with DO_2OP_L, TYPE and H are for the * input (smaller) type and LESIZE, LTYPE, LH for the output (long) type. diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index aa38218e08..b56c91db2a 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -596,6 +596,7 @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar) DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar) DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar) DO_2OP_SCALAR(VBRSR, vbrsr) +DO_2OP_SCALAR(VMLAS, vmlas) static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a) { From 1b15a97d4cf99efdd60e60e0bfd2d185174ab0eb Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:52 +0100 Subject: [PATCH 19/44] target/arm: Implement MVE shift-by-scalar Implement the MVE instructions which perform shifts by a scalar. These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2. They take the shift amount in a general purpose register and shift every element in the vector by that amount. Mostly we can reuse the helper functions for shift-by-immediate; we do need two new helpers for VQRSHL. 
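As a rough illustration (this sketch is not part of the patch, and the function name is invented for the example), the unpredicated per-lane behaviour of the plain VSHL.U32 register form is approximately:

    #include <stdint.h>

    /*
     * Sketch only: predication, saturation and out-of-range shift
     * counts are ignored. The shift count is the signed value in the
     * low byte of Rm; negative counts shift right.
     */
    static void vshl_u32_by_gpr_sketch(uint32_t q[4], uint32_t rm)
    {
        int8_t shift = (int8_t)rm;
        for (int e = 0; e < 4; e++) {
            q[e] = shift >= 0 ? q[e] << shift : q[e] >> -shift;
        }
    }

The VQRSHL forms additionally round right shifts and saturate left shifts, which is why they need the two new helpers.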
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 8 +++++++ target/arm/mve.decode | 23 ++++++++++++++++--- target/arm/mve_helper.c | 2 ++ target/arm/translate-mve.c | 46 ++++++++++++++++++++++++++++++++++++++ 4 files changed, 76 insertions(+), 3 deletions(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 715b1bbd01..0ee5ea3cab 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -414,6 +414,14 @@ DEF_HELPER_FLAGS_4(mve_vrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vqrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_4(mve_vshllbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vshllbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vshllbub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 226b74790b..eb26b103d1 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -39,6 +39,7 @@ &viwdup qd rn rm size imm &vcmp qm qn size mask &vcmp_scalar qn rm size mask +&shl_scalar qda rm size @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0 # Note that both Rn and Qd are 3 bits only (no D bit) @@ -88,6 +89,8 @@ @2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \ size=2 shift=%rshift_i5 +@shl_scalar .... .... .... size:2 .. .... .... .... rm:4 &shl_scalar qda=%qd + # Vector comparison; 4-bit Qm but 3-bit Qn %mask_22_13 22:1 13:3 @vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13 @@ -320,7 +323,23 @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_no VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar -VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar + +{ + VSHL_S_scalar 1110 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar + VRSHL_S_scalar 1110 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar + VQSHL_S_scalar 1110 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar + VQRSHL_S_scalar 1110 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar + VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar +} + +{ + VSHL_U_scalar 1111 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar + VRSHL_U_scalar 1111 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar + VQSHL_U_scalar 1111 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar + VQRSHL_U_scalar 1111 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar + VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar +} + VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar @@ -340,8 +359,6 @@ VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar size=%size_28 } -VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... 
@2scalar - VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index ab02a1e60f..ac608fc524 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1334,6 +1334,8 @@ DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP) DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP) DO_2SHIFT_U(vrshli_u, DO_VRSHLU) DO_2SHIFT_S(vrshli_s, DO_VRSHLS) +DO_2SHIFT_SAT_U(vqrshli_u, DO_UQRSHL_OP) +DO_2SHIFT_SAT_S(vqrshli_s, DO_SQRSHL_OP) /* Shift-and-insert; we always work with 64 bits at a time */ #define DO_2SHIFT_INSERT(OP, ESIZE, SHIFTFN, MASKFN) \ diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index b56c91db2a..44731fc4eb 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -1003,6 +1003,52 @@ DO_2SHIFT(VRSHRI_U, vrshli_u, true) DO_2SHIFT(VSRI, vsri, false) DO_2SHIFT(VSLI, vsli, false) +static bool do_2shift_scalar(DisasContext *s, arg_shl_scalar *a, + MVEGenTwoOpShiftFn *fn) +{ + TCGv_ptr qda; + TCGv_i32 rm; + + if (!dc_isar_feature(aa32_mve, s) || + !mve_check_qreg_bank(s, a->qda) || + a->rm == 13 || a->rm == 15 || !fn) { + /* Rm cases are UNPREDICTABLE */ + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qda = mve_qreg_ptr(a->qda); + rm = load_reg(s, a->rm); + fn(cpu_env, qda, qda, rm); + tcg_temp_free_ptr(qda); + tcg_temp_free_i32(rm); + mve_update_eci(s); + return true; +} + +#define DO_2SHIFT_SCALAR(INSN, FN) \ + static bool trans_##INSN(DisasContext *s, arg_shl_scalar *a) \ + { \ + static MVEGenTwoOpShiftFn * const fns[] = { \ + gen_helper_mve_##FN##b, \ + gen_helper_mve_##FN##h, \ + gen_helper_mve_##FN##w, \ + NULL, \ + }; \ + return do_2shift_scalar(s, a, fns[a->size]); \ + } + +DO_2SHIFT_SCALAR(VSHL_S_scalar, vshli_s) +DO_2SHIFT_SCALAR(VSHL_U_scalar, vshli_u) +DO_2SHIFT_SCALAR(VRSHL_S_scalar, vrshli_s) +DO_2SHIFT_SCALAR(VRSHL_U_scalar, vrshli_u) +DO_2SHIFT_SCALAR(VQSHL_S_scalar, vqshli_s) +DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u) +DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s) +DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u) + #define DO_VSHLL(INSN, FN) \ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \ { \ From 345910f8c1d687404b62194d929ca32f2ab54e80 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:52 +0100 Subject: [PATCH 20/44] target/arm: Move 'x' and 'a' bit definitions into vmlaldav formats All the users of the vmlaldav formats have an 'x bit in bit 12 and an 'a' bit in bit 5; move these to the format rather than specifying them in each insn pattern. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve.decode | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index eb26b103d1..bdcd660aaf 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -305,19 +305,19 @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2 &vmlaldav rdahi rdalo size qn qm x a -@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \ +@vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \ qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav -@vmlaldav_nosz .... .... . ... ... . ... . .... .... qm:3 . \ +@vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \ qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav -VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav -VMLALDAV_U 1111 1110 1 ... ... . 
... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav +VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav +VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav -VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav +VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav -VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz -VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz +VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz +VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz -VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz +VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz # Scalar operations From 688ba4cf33f4976e26124c4c24e9eb738615b0bf Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:52 +0100 Subject: [PATCH 21/44] target/arm: Implement MVE integer min/max across vector Implement the MVE integer min/max across vector insns VMAXV, VMINV, VMAXAV and VMINAV, which find the maximum from the vector elements and a general purpose register, and store the maximum back into the general purpose register. These insns overlap with VRMLALDAVH (they use what would be RdaHi=0b110). Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 20 ++++++++++++ target/arm/mve.decode | 18 +++++++++-- target/arm/mve_helper.c | 66 ++++++++++++++++++++++++++++++++++++++ target/arm/translate-mve.c | 48 +++++++++++++++++++++++++++ 4 files changed, 150 insertions(+), 2 deletions(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 0ee5ea3cab..2c66fcba79 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -379,6 +379,26 @@ DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32) DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32) DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxvsb, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxvsh, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxvsw, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxvub, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxvuh, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxvuw, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxavb, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxavh, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vmaxavw, TCG_CALL_NO_WG, i32, env, ptr, i32) + +DEF_HELPER_FLAGS_3(mve_vminvsb, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminvsh, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminvsw, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminvub, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminvuh, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminvuw, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminavb, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminavh, TCG_CALL_NO_WG, i32, env, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32) + DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64) DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index bdcd660aaf..83dc0300d6 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode 
@@ -40,6 +40,7 @@ &vcmp qm qn size mask &vcmp_scalar qn rm size mask &shl_scalar qda rm size +&vmaxv qm rda size @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0 # Note that both Rn and Qd are 3 bits only (no D bit) @@ -97,6 +98,8 @@ @vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \ mask=%mask_22_13 +@vmaxv .... .... .... size:2 .. rda:4 .... .... .... &vmaxv qm=%qm + # Vector loads and stores # Widening loads and narrowing stores: @@ -314,8 +317,19 @@ VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav -VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz -VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz +{ + VMAXV_S 1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv + VMINV_S 1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv + VMAXAV 1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv + VMINAV 1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv + VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz +} + +{ + VMAXV_U 1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv + VMINV_U 1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv + VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz +} VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index ac608fc524..924ad7f2bd 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1254,6 +1254,72 @@ DO_VADDV(vaddvub, 1, uint8_t) DO_VADDV(vaddvuh, 2, uint16_t) DO_VADDV(vaddvuw, 4, uint32_t) +/* + * Vector max/min across vector. Unlike VADDV, we must + * read ra as the element size, not its full width. + * We work with int64_t internally for simplicity. + */ +#define DO_VMAXMINV(OP, ESIZE, TYPE, RATYPE, FN) \ + uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \ + uint32_t ra_in) \ + { \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + TYPE *m = vm; \ + int64_t ra = (RATYPE)ra_in; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + if (mask & 1) { \ + ra = FN(ra, m[H##ESIZE(e)]); \ + } \ + } \ + mve_advance_vpt(env); \ + return ra; \ + } \ + +#define DO_VMAXMINV_U(INSN, FN) \ + DO_VMAXMINV(INSN##b, 1, uint8_t, uint8_t, FN) \ + DO_VMAXMINV(INSN##h, 2, uint16_t, uint16_t, FN) \ + DO_VMAXMINV(INSN##w, 4, uint32_t, uint32_t, FN) +#define DO_VMAXMINV_S(INSN, FN) \ + DO_VMAXMINV(INSN##b, 1, int8_t, int8_t, FN) \ + DO_VMAXMINV(INSN##h, 2, int16_t, int16_t, FN) \ + DO_VMAXMINV(INSN##w, 4, int32_t, int32_t, FN) + +/* + * Helpers for max and min of absolute values across vector: + * note that we only take the absolute value of 'm', not 'n' + */ +static int64_t do_maxa(int64_t n, int64_t m) +{ + if (m < 0) { + m = -m; + } + return MAX(n, m); +} + +static int64_t do_mina(int64_t n, int64_t m) +{ + if (m < 0) { + m = -m; + } + return MIN(n, m); +} + +DO_VMAXMINV_S(vmaxvs, DO_MAX) +DO_VMAXMINV_U(vmaxvu, DO_MAX) +DO_VMAXMINV_S(vminvs, DO_MIN) +DO_VMAXMINV_U(vminvu, DO_MIN) +/* + * VMAXAV, VMINAV treat the general purpose input as unsigned + * and the vector elements as signed. 
+ */ +DO_VMAXMINV(vmaxavb, 1, int8_t, uint8_t, do_maxa) +DO_VMAXMINV(vmaxavh, 2, int16_t, uint16_t, do_maxa) +DO_VMAXMINV(vmaxavw, 4, int32_t, uint32_t, do_maxa) +DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina) +DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina) +DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina) + #define DO_VADDLV(OP, TYPE, LTYPE) \ uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \ uint64_t ra) \ diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 44731fc4eb..2fce74f86a 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -1321,3 +1321,51 @@ DO_VCMP(VCMPGE, vcmpge) DO_VCMP(VCMPLT, vcmplt) DO_VCMP(VCMPGT, vcmpgt) DO_VCMP(VCMPLE, vcmple) + +static bool do_vmaxv(DisasContext *s, arg_vmaxv *a, MVEGenVADDVFn fn) +{ + /* + * MIN/MAX operations across a vector: compute the min or + * max of the initial value in a general purpose register + * and all the elements in the vector, and store it back + * into the general purpose register. + */ + TCGv_ptr qm; + TCGv_i32 rda; + + if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) || + !fn || a->rda == 13 || a->rda == 15) { + /* Rda cases are UNPREDICTABLE */ + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qm = mve_qreg_ptr(a->qm); + rda = load_reg(s, a->rda); + fn(rda, cpu_env, qm, rda); + store_reg(s, a->rda, rda); + tcg_temp_free_ptr(qm); + mve_update_eci(s); + return true; +} + +#define DO_VMAXV(INSN, FN) \ + static bool trans_##INSN(DisasContext *s, arg_vmaxv *a) \ + { \ + static MVEGenVADDVFn * const fns[] = { \ + gen_helper_mve_##FN##b, \ + gen_helper_mve_##FN##h, \ + gen_helper_mve_##FN##w, \ + NULL, \ + }; \ + return do_vmaxv(s, a, fns[a->size]); \ + } + +DO_VMAXV(VMAXV_S, vmaxvs) +DO_VMAXV(VMAXV_U, vmaxvu) +DO_VMAXV(VMAXAV, vmaxav) +DO_VMAXV(VMINV_S, vminvs) +DO_VMAXV(VMINV_U, vminvu) +DO_VMAXV(VMINAV, vminav) From 7f061c0ab9289cb0ed55eaf09bec1b6cb474e6ee Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:53 +0100 Subject: [PATCH 22/44] target/arm: Implement MVE VABAV Implement the MVE VABAV insn, which computes absolute differences between elements of two vectors and accumulates the result into a general purpose register. 
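As a rough illustration (this sketch is not part of the patch, and the function name is invented for the example), the unpredicated VABAV.U32 operation is approximately:

    #include <stdint.h>

    /*
     * Sketch only: accumulate the absolute difference of each pair of
     * lanes into the general purpose register value Rda.
     */
    static uint32_t vabav_u32_sketch(const uint32_t qn[4],
                                     const uint32_t qm[4], uint32_t rda)
    {
        for (int e = 0; e < 4; e++) {
            rda += qn[e] >= qm[e] ? qn[e] - qm[e] : qm[e] - qn[e];
        }
        return rda;
    }

The signed forms compare and subtract the lanes as signed values but accumulate the resulting (non-negative) difference in the same way.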
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 7 +++++++ target/arm/mve.decode | 6 ++++++ target/arm/mve_helper.c | 26 +++++++++++++++++++++++ target/arm/translate-mve.c | 43 ++++++++++++++++++++++++++++++++++++++ 4 files changed, 82 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 2c66fcba79..c7e7aab2cb 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -402,6 +402,13 @@ DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32) DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64) DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64) +DEF_HELPER_FLAGS_4(mve_vabavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vabavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vabavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vabavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vabavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vabavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64) DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64) DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 83dc0300d6..c8a06edca7 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -41,6 +41,7 @@ &vcmp_scalar qn rm size mask &shl_scalar qda rm size &vmaxv qm rda size +&vabav qn qm rda size @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0 # Note that both Rn and Qd are 3 bits only (no D bit) @@ -386,6 +387,11 @@ VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar rdahi=%rdahi rdalo=%rdalo } +@vabav .... .... .. size:2 .... rda:4 .... .... .... &vabav qn=%qn qm=%qm + +VABAV_S 111 0 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav +VABAV_U 111 1 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav + # Logical immediate operations (1 reg and modified-immediate) # The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 924ad7f2bd..fed0f3cd61 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1320,6 +1320,32 @@ DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina) DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina) DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina) +#define DO_VABAV(OP, ESIZE, TYPE) \ + uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \ + void *vm, uint32_t ra) \ + { \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + TYPE *m = vm, *n = vn; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + if (mask & 1) { \ + int64_t n0 = n[H##ESIZE(e)]; \ + int64_t m0 = m[H##ESIZE(e)]; \ + uint32_t r = n0 >= m0 ? 
(n0 - m0) : (m0 - n0); \ + ra += r; \ + } \ + } \ + mve_advance_vpt(env); \ + return ra; \ + } + +DO_VABAV(vabavsb, 1, int8_t) +DO_VABAV(vabavsh, 2, int16_t) +DO_VABAV(vabavsw, 4, int32_t) +DO_VABAV(vabavub, 1, uint8_t) +DO_VABAV(vabavuh, 2, uint16_t) +DO_VABAV(vabavuw, 4, uint32_t) + #define DO_VADDLV(OP, TYPE, LTYPE) \ uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \ uint64_t ra) \ diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 2fce74f86a..247f6719e6 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -45,6 +45,7 @@ typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32); typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32); typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32); +typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */ static inline long mve_qreg_offset(unsigned reg) @@ -1369,3 +1370,45 @@ DO_VMAXV(VMAXAV, vmaxav) DO_VMAXV(VMINV_S, vminvs) DO_VMAXV(VMINV_U, vminvu) DO_VMAXV(VMINAV, vminav) + +static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn) +{ + /* Absolute difference accumulated across vector */ + TCGv_ptr qn, qm; + TCGv_i32 rda; + + if (!dc_isar_feature(aa32_mve, s) || + !mve_check_qreg_bank(s, a->qm | a->qn) || + !fn || a->rda == 13 || a->rda == 15) { + /* Rda cases are UNPREDICTABLE */ + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qm = mve_qreg_ptr(a->qm); + qn = mve_qreg_ptr(a->qn); + rda = load_reg(s, a->rda); + fn(rda, cpu_env, qn, qm, rda); + store_reg(s, a->rda, rda); + tcg_temp_free_ptr(qm); + tcg_temp_free_ptr(qn); + mve_update_eci(s); + return true; +} + +#define DO_VABAV(INSN, FN) \ + static bool trans_##INSN(DisasContext *s, arg_vabav *a) \ + { \ + static MVEGenVABAVFn * const fns[] = { \ + gen_helper_mve_##FN##b, \ + gen_helper_mve_##FN##h, \ + gen_helper_mve_##FN##w, \ + NULL, \ + }; \ + return do_vabav(s, a, fns[a->size]); \ + } + +DO_VABAV(VABAV_S, vabavs) +DO_VABAV(VABAV_U, vabavu) From 54dc78a901188d208a3dfedb0f98230043509120 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:53 +0100 Subject: [PATCH 23/44] target/arm: Implement MVE narrowing moves Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN. These take a double-width input, narrow it (possibly saturating) and store the result to either the top or bottom half of the output element. 
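As a rough illustration (this sketch is not part of the patch, and the function name is invented for the example), the unpredicated VQMOVNT.S16 operation is approximately:

    #include <stdint.h>

    /*
     * Sketch only: each 16-bit source lane is saturated to 8 bits and
     * written to the top byte of the corresponding 16-bit destination
     * container; the "B" form writes the bottom byte instead. A
     * little-endian lane layout is assumed.
     */
    static void vqmovnt_s16_sketch(int8_t qd[16], const int16_t qm[8])
    {
        for (int le = 0; le < 8; le++) {
            int16_t v = qm[le];
            int8_t r = v > INT8_MAX ? INT8_MAX : v < INT8_MIN ? INT8_MIN : v;
            qd[le * 2 + 1] = r;
        }
    }

VMOVN simply truncates instead of saturating, and VQMOVUN saturates a signed input to the unsigned range of the narrower type.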
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 20 ++++++++++ target/arm/mve.decode | 12 ++++++ target/arm/mve_helper.c | 78 ++++++++++++++++++++++++++++++++++++++ target/arm/translate-mve.c | 22 +++++++++++ 4 files changed, 132 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index c7e7aab2cb..17484f7432 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -76,6 +76,26 @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vmovnth, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vqmovunbb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovunbh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovuntb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovunth, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vqmovnbsb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovnbsh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovntsb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovntsh, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vqmovnbub, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovnbuh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovntub, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqmovntuh, TCG_CALL_NO_WG, void, env, ptr, ptr) + DEF_HELPER_FLAGS_4(mve_vand, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index c8a06edca7..d295a693b1 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -153,6 +153,9 @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h + VQMOVUNB 111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op + VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op + VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op } @@ -160,6 +163,9 @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h + VMOVNB 111 1 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op + VQMOVN_BU 111 1 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op + VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op } @@ -167,6 +173,9 @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op VSHLL_TS 111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b VSHLL_TS 111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h + VQMOVUNT 111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op + VQMOVN_TS 111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op + VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op } @@ -174,6 +183,9 @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op VSHLL_TU 111 1 1110 0 . 11 .. 
01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b VSHLL_TU 111 1 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h + VMOVNT 111 1 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op + VQMOVN_TU 111 1 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op + VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op } diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index fed0f3cd61..72c30f360a 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1650,6 +1650,84 @@ DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH) DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B) DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H) +#define DO_VMOVN(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \ + { \ + LTYPE *m = vm; \ + TYPE *d = vd; \ + uint16_t mask = mve_element_mask(env); \ + unsigned le; \ + mask >>= ESIZE * TOP; \ + for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \ + mergemask(&d[H##ESIZE(le * 2 + TOP)], \ + m[H##LESIZE(le)], mask); \ + } \ + mve_advance_vpt(env); \ + } + +DO_VMOVN(vmovnbb, false, 1, uint8_t, 2, uint16_t) +DO_VMOVN(vmovnbh, false, 2, uint16_t, 4, uint32_t) +DO_VMOVN(vmovntb, true, 1, uint8_t, 2, uint16_t) +DO_VMOVN(vmovnth, true, 2, uint16_t, 4, uint32_t) + +#define DO_VMOVN_SAT(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \ + { \ + LTYPE *m = vm; \ + TYPE *d = vd; \ + uint16_t mask = mve_element_mask(env); \ + bool qc = false; \ + unsigned le; \ + mask >>= ESIZE * TOP; \ + for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \ + bool sat = false; \ + TYPE r = FN(m[H##LESIZE(le)], &sat); \ + mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \ + qc |= sat & mask & 1; \ + } \ + if (qc) { \ + env->vfp.qc[0] = qc; \ + } \ + mve_advance_vpt(env); \ + } + +#define DO_VMOVN_SAT_UB(BOP, TOP, FN) \ + DO_VMOVN_SAT(BOP, false, 1, uint8_t, 2, uint16_t, FN) \ + DO_VMOVN_SAT(TOP, true, 1, uint8_t, 2, uint16_t, FN) + +#define DO_VMOVN_SAT_UH(BOP, TOP, FN) \ + DO_VMOVN_SAT(BOP, false, 2, uint16_t, 4, uint32_t, FN) \ + DO_VMOVN_SAT(TOP, true, 2, uint16_t, 4, uint32_t, FN) + +#define DO_VMOVN_SAT_SB(BOP, TOP, FN) \ + DO_VMOVN_SAT(BOP, false, 1, int8_t, 2, int16_t, FN) \ + DO_VMOVN_SAT(TOP, true, 1, int8_t, 2, int16_t, FN) + +#define DO_VMOVN_SAT_SH(BOP, TOP, FN) \ + DO_VMOVN_SAT(BOP, false, 2, int16_t, 4, int32_t, FN) \ + DO_VMOVN_SAT(TOP, true, 2, int16_t, 4, int32_t, FN) + +#define DO_VQMOVN_SB(N, SATP) \ + do_sat_bhs((int64_t)(N), INT8_MIN, INT8_MAX, SATP) +#define DO_VQMOVN_UB(N, SATP) \ + do_sat_bhs((uint64_t)(N), 0, UINT8_MAX, SATP) +#define DO_VQMOVUN_B(N, SATP) \ + do_sat_bhs((int64_t)(N), 0, UINT8_MAX, SATP) + +#define DO_VQMOVN_SH(N, SATP) \ + do_sat_bhs((int64_t)(N), INT16_MIN, INT16_MAX, SATP) +#define DO_VQMOVN_UH(N, SATP) \ + do_sat_bhs((uint64_t)(N), 0, UINT16_MAX, SATP) +#define DO_VQMOVUN_H(N, SATP) \ + do_sat_bhs((int64_t)(N), 0, UINT16_MAX, SATP) + +DO_VMOVN_SAT_SB(vqmovnbsb, vqmovntsb, DO_VQMOVN_SB) +DO_VMOVN_SAT_SH(vqmovnbsh, vqmovntsh, DO_VQMOVN_SH) +DO_VMOVN_SAT_UB(vqmovnbub, vqmovntub, DO_VQMOVN_UB) +DO_VMOVN_SAT_UH(vqmovnbuh, vqmovntuh, DO_VQMOVN_UH) +DO_VMOVN_SAT_SB(vqmovunbb, vqmovuntb, DO_VQMOVUN_B) +DO_VMOVN_SAT_SH(vqmovunbh, vqmovunth, DO_VQMOVUN_H) + uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm, uint32_t shift) { diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 247f6719e6..5c3655efc3 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -275,6 
+275,28 @@ DO_1OP(VCLS, vcls) DO_1OP(VABS, vabs) DO_1OP(VNEG, vneg) +/* Narrowing moves: only size 0 and 1 are valid */ +#define DO_VMOVN(INSN, FN) \ + static bool trans_##INSN(DisasContext *s, arg_1op *a) \ + { \ + static MVEGenOneOpFn * const fns[] = { \ + gen_helper_mve_##FN##b, \ + gen_helper_mve_##FN##h, \ + NULL, \ + NULL, \ + }; \ + return do_1op(s, a, fns[a->size]); \ + } + +DO_VMOVN(VMOVNB, vmovnb) +DO_VMOVN(VMOVNT, vmovnt) +DO_VMOVN(VQMOVUNB, vqmovunb) +DO_VMOVN(VQMOVUNT, vqmovunt) +DO_VMOVN(VQMOVN_BS, vqmovnbs) +DO_VMOVN(VQMOVN_TS, vqmovnts) +DO_VMOVN(VQMOVN_BU, vqmovnbu) +DO_VMOVN(VQMOVN_TU, vqmovntu) + static bool trans_VREV16(DisasContext *s, arg_1op *a) { static MVEGenOneOpFn * const fns[] = { From 640cdf20a25d0021f4e93b6207b648a973df320b Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:53 +0100 Subject: [PATCH 24/44] target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn The MVEGenDualAccOpFn is a bit misnamed, since it is used for the "long dual accumulate" operations that use a 64-bit accumulator. Rename it to MVEGenLongDualAccOpFn so we can use the former name for the 32-bit accumulator insns. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/translate-mve.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 5c3655efc3..676411e05c 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -38,7 +38,7 @@ typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); -typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64); +typedef void MVEGenLongDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64); typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64); typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32); @@ -652,7 +652,7 @@ static bool trans_VQDMULLT_scalar(DisasContext *s, arg_2scalar *a) } static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a, - MVEGenDualAccOpFn *fn) + MVEGenLongDualAccOpFn *fn) { TCGv_ptr qn, qm; TCGv_i64 rda; @@ -710,7 +710,7 @@ static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a, static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a) { - static MVEGenDualAccOpFn * const fns[4][2] = { + static MVEGenLongDualAccOpFn * const fns[4][2] = { { NULL, NULL }, { gen_helper_mve_vmlaldavsh, gen_helper_mve_vmlaldavxsh }, { gen_helper_mve_vmlaldavsw, gen_helper_mve_vmlaldavxsw }, @@ -721,7 +721,7 @@ static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a) static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a) { - static MVEGenDualAccOpFn * const fns[4][2] = { + static MVEGenLongDualAccOpFn * const fns[4][2] = { { NULL, NULL }, { gen_helper_mve_vmlaldavuh, NULL }, { gen_helper_mve_vmlaldavuw, NULL }, @@ -732,7 +732,7 @@ static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a) static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a) { - static MVEGenDualAccOpFn * const fns[4][2] = { + static MVEGenLongDualAccOpFn * const fns[4][2] = { { NULL, NULL }, { gen_helper_mve_vmlsldavsh, gen_helper_mve_vmlsldavxsh }, { gen_helper_mve_vmlsldavsw, gen_helper_mve_vmlsldavxsw }, @@ -743,7 +743,7 @@ static bool 
trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a) static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a) { - static MVEGenDualAccOpFn * const fns[] = { + static MVEGenLongDualAccOpFn * const fns[] = { gen_helper_mve_vrmlaldavhsw, gen_helper_mve_vrmlaldavhxsw, }; return do_long_dual_acc(s, a, fns[a->x]); @@ -751,7 +751,7 @@ static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a) static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a) { - static MVEGenDualAccOpFn * const fns[] = { + static MVEGenLongDualAccOpFn * const fns[] = { gen_helper_mve_vrmlaldavhuw, NULL, }; return do_long_dual_acc(s, a, fns[a->x]); @@ -759,7 +759,7 @@ static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a) static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a) { - static MVEGenDualAccOpFn * const fns[] = { + static MVEGenLongDualAccOpFn * const fns[] = { gen_helper_mve_vrmlsldavhsw, gen_helper_mve_vrmlsldavhxsw, }; return do_long_dual_acc(s, a, fns[a->x]); From f0ffff5163cb503de236fc766121601592f08744 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:54 +0100 Subject: [PATCH 25/44] target/arm: Implement MVE VMLADAV and VMLSLDAV Implement the MVE VMLADAV and VMLSLDAV insns. Like the VMLALDAV and VMLSLDAV insns already implemented, these accumulate multiplied vector elements; but they accumulate a 32-bit result rather than a 64-bit one. Note that these encodings overlap with what would be RdaHi=0b111 for VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 17 ++++++++++ target/arm/mve.decode | 33 +++++++++++++++++--- target/arm/mve_helper.c | 41 ++++++++++++++++++++++++ target/arm/translate-mve.c | 64 ++++++++++++++++++++++++++++++++++++++ 4 files changed, 150 insertions(+), 5 deletions(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 17484f7432..34d644a519 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -392,6 +392,23 @@ DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) +DEF_HELPER_FLAGS_4(mve_vmladavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmladavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmladavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmladavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmladavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmladavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlsdavb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlsdavh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlsdavw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vmladavsxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmladavsxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmladavsxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlsdavxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlsdavxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlsdavxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_3(mve_vaddvsb, TCG_CALL_NO_WG, i32, env, ptr, i32) DEF_HELPER_FLAGS_3(mve_vaddvub, TCG_CALL_NO_WG, i32, env, ptr, i32) 
DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index d295a693b1..cec5a51b0e 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -320,32 +320,55 @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2 %size_16 16:1 !function=plus_1 &vmlaldav rdahi rdalo size qn qm x a +&vmladav rda size qn qm x a @vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \ qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav @vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \ qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav -VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav -VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav +@vmladav .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \ + qn=%qn rda=%rdalo size=%size_16 &vmladav +@vmladav_nosz .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \ + qn=%qn rda=%rdalo size=0 &vmladav -VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav +{ + VMLADAV_S 1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav + VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav +} +{ + VMLADAV_U 1111 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav + VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav +} + +{ + VMLSDAV 1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 1 @vmladav + VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav +} + +{ + VMLSDAV 1111 1110 1111 ... 0 ... . 1110 . 0 . 0 ... 1 @vmladav_nosz + VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz +} + +VMLADAV_S 1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz +VMLADAV_U 1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz { VMAXV_S 1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv VMINV_S 1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv VMAXAV 1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv VMINAV 1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv + VMLADAV_S 1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz } { VMAXV_U 1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv VMINV_U 1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv + VMLADAV_U 1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz } -VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz - # Scalar operations VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... 
@2scalar diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 72c30f360a..ea206c932b 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1189,6 +1189,47 @@ DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=) DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=) DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=) +/* + * Multiply add dual accumulate ops + */ +#define DO_DAV(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC) \ + uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \ + void *vm, uint32_t a) \ + { \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + TYPE *n = vn, *m = vm; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + if (mask & 1) { \ + if (e & 1) { \ + a ODDACC \ + n[H##ESIZE(e - 1 * XCHG)] * m[H##ESIZE(e)]; \ + } else { \ + a EVENACC \ + n[H##ESIZE(e + 1 * XCHG)] * m[H##ESIZE(e)]; \ + } \ + } \ + } \ + mve_advance_vpt(env); \ + return a; \ + } + +#define DO_DAV_S(INSN, XCHG, EVENACC, ODDACC) \ + DO_DAV(INSN##b, 1, int8_t, XCHG, EVENACC, ODDACC) \ + DO_DAV(INSN##h, 2, int16_t, XCHG, EVENACC, ODDACC) \ + DO_DAV(INSN##w, 4, int32_t, XCHG, EVENACC, ODDACC) + +#define DO_DAV_U(INSN, XCHG, EVENACC, ODDACC) \ + DO_DAV(INSN##b, 1, uint8_t, XCHG, EVENACC, ODDACC) \ + DO_DAV(INSN##h, 2, uint16_t, XCHG, EVENACC, ODDACC) \ + DO_DAV(INSN##w, 4, uint32_t, XCHG, EVENACC, ODDACC) + +DO_DAV_S(vmladavs, false, +=, +=) +DO_DAV_U(vmladavu, false, +=, +=) +DO_DAV_S(vmlsdav, false, +=, -=) +DO_DAV_S(vmladavsx, true, +=, +=) +DO_DAV_S(vmlsdavx, true, +=, -=) + /* * Rounding multiply add long dual accumulate high. In the pseudocode * this is implemented with a 72-bit internal accumulator value of which diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 676411e05c..92ed1be83e 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -46,6 +46,7 @@ typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TC typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); +typedef void MVEGenDualAccOpFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */ static inline long mve_qreg_offset(unsigned reg) @@ -765,6 +766,69 @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a) return do_long_dual_acc(s, a, fns[a->x]); } +static bool do_dual_acc(DisasContext *s, arg_vmladav *a, MVEGenDualAccOpFn *fn) +{ + TCGv_ptr qn, qm; + TCGv_i32 rda; + + if (!dc_isar_feature(aa32_mve, s) || + !mve_check_qreg_bank(s, a->qn) || + !fn) { + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + qn = mve_qreg_ptr(a->qn); + qm = mve_qreg_ptr(a->qm); + + /* + * This insn is subject to beat-wise execution. Partial execution + * of an A=0 (no-accumulate) insn which does not execute the first + * beat must start with the current rda value, not 0. 
+ */ + if (a->a || mve_skip_first_beat(s)) { + rda = load_reg(s, a->rda); + } else { + rda = tcg_const_i32(0); + } + + fn(rda, cpu_env, qn, qm, rda); + store_reg(s, a->rda, rda); + tcg_temp_free_ptr(qn); + tcg_temp_free_ptr(qm); + + mve_update_eci(s); + return true; +} + +#define DO_DUAL_ACC(INSN, FN) \ + static bool trans_##INSN(DisasContext *s, arg_vmladav *a) \ + { \ + static MVEGenDualAccOpFn * const fns[4][2] = { \ + { gen_helper_mve_##FN##b, gen_helper_mve_##FN##xb }, \ + { gen_helper_mve_##FN##h, gen_helper_mve_##FN##xh }, \ + { gen_helper_mve_##FN##w, gen_helper_mve_##FN##xw }, \ + { NULL, NULL }, \ + }; \ + return do_dual_acc(s, a, fns[a->size][a->x]); \ + } + +DO_DUAL_ACC(VMLADAV_S, vmladavs) +DO_DUAL_ACC(VMLSDAV, vmlsdav) + +static bool trans_VMLADAV_U(DisasContext *s, arg_vmladav *a) +{ + static MVEGenDualAccOpFn * const fns[4][2] = { + { gen_helper_mve_vmladavub, NULL }, + { gen_helper_mve_vmladavuh, NULL }, + { gen_helper_mve_vmladavuw, NULL }, + { NULL, NULL }, + }; + return do_dual_acc(s, a, fns[a->size][a->x]); +} + static void gen_vpst(DisasContext *s, uint32_t mask) { /* From c69e34c6debfb567f6118b59e6efa96a20765dda Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:54 +0100 Subject: [PATCH 26/44] target/arm: Implement MVE VMLA Implement the MVE VMLA insn, which multiplies a vector by a scalar and accumulates into another vector. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 4 ++++ target/arm/mve.decode | 1 + target/arm/mve_helper.c | 5 +++++ target/arm/translate-mve.c | 1 + 4 files changed, 11 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 34d644a519..328e31e266 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -367,6 +367,10 @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i3 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlab, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlah, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vmlaw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index cec5a51b0e..cd9c806a11 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -413,6 +413,7 @@ VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar # The U bit (28) is don't-care because it does not affect the result +VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... 
@2scalar # Vector add across vector diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index ea206c932b..8004b9bb72 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -1008,6 +1008,11 @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B) DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H) DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W) +/* Vector by scalar plus vector */ +#define DO_VMLA(D, N, M) ((N) * (M) + (D)) + +DO_2OP_ACC_SCALAR_U(vmla, DO_VMLA) + /* Vector by vector plus scalar */ #define DO_VMLAS(D, N, M) ((N) * (D) + (M)) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 92ed1be83e..f8899af352 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -620,6 +620,7 @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar) DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar) DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar) DO_2OP_SCALAR(VBRSR, vbrsr) +DO_2OP_SCALAR(VMLA, vmla) DO_2OP_SCALAR(VMLAS, vmlas) static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a) From 8be9a25058f9a5505d6864f06de86ee01d42fc59 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:54 +0100 Subject: [PATCH 27/44] target/arm: Implement MVE saturating doubling multiply accumulates Implement the MVE saturating doubling multiply accumulate insns VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH. These perform a multiply, double, add the accumulator shifted by the element size, possibly round, saturate to twice the element size, then take the high half of the result. The *MLAH insns do vector * scalar + vector, and the *MLASH insns do vector * vector + scalar. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 16 +++++++ target/arm/mve.decode | 5 ++ target/arm/mve_helper.c | 95 ++++++++++++++++++++++++++++++++++++++ target/arm/translate-mve.c | 4 ++ 4 files changed, 120 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 328e31e266..2f54396b2d 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -375,6 +375,22 @@ DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vqrdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vqdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vqrdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vqrdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64) diff --git a/target/arm/mve.decode 
b/target/arm/mve.decode index cd9c806a11..7a6de3991b 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -416,6 +416,11 @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar +VQRDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 100 .... @2scalar +VQRDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 100 .... @2scalar +VQDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 110 .... @2scalar +VQDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 110 .... @2scalar + # Vector add across vector { VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 8004b9bb72..a69fcd2243 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -964,6 +964,28 @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w) mve_advance_vpt(env); \ } +#define DO_2OP_SAT_ACC_SCALAR(OP, ESIZE, TYPE, FN) \ + void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \ + uint32_t rm) \ + { \ + TYPE *d = vd, *n = vn; \ + TYPE m = rm; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + bool qc = false; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + bool sat = false; \ + mergemask(&d[H##ESIZE(e)], \ + FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m, &sat), \ + mask); \ + qc |= sat & mask & 1; \ + } \ + if (qc) { \ + env->vfp.qc[0] = qc; \ + } \ + mve_advance_vpt(env); \ + } + /* provide unsigned 2-op scalar helpers for all sizes */ #define DO_2OP_SCALAR_U(OP, FN) \ DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \ @@ -1008,6 +1030,79 @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B) DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H) DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W) +static int8_t do_vqdmlah_b(int8_t a, int8_t b, int8_t c, int round, bool *sat) +{ + int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 8) + (round << 7); + return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8; +} + +static int16_t do_vqdmlah_h(int16_t a, int16_t b, int16_t c, + int round, bool *sat) +{ + int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 16) + (round << 15); + return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16; +} + +static int32_t do_vqdmlah_w(int32_t a, int32_t b, int32_t c, + int round, bool *sat) +{ + /* + * Architecturally we should do the entire add, double, round + * and then check for saturation. We do three saturating adds, + * but we need to be careful about the order. If the first + * m1 + m2 saturates then it's impossible for the *2+rc to + * bring it back into the non-saturated range. However, if + * m1 + m2 is negative then it's possible that doing the doubling + * would take the intermediate result below INT64_MAX and the + * addition of the rounding constant then brings it back in range. + * So we add half the rounding constant and half the "c << esize" + * before doubling rather than adding the rounding constant after + * the doubling. + */ + int64_t m1 = (int64_t)a * b; + int64_t m2 = (int64_t)c << 31; + int64_t r; + if (sadd64_overflow(m1, m2, &r) || + sadd64_overflow(r, (round << 30), &r) || + sadd64_overflow(r, r, &r)) { + *sat = true; + return r < 0 ? 
INT32_MAX : INT32_MIN; + } + return r >> 32; +} + +/* + * The *MLAH insns are vector * scalar + vector; + * the *MLASH insns are vector * vector + scalar + */ +#define DO_VQDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 0, S) +#define DO_VQDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 0, S) +#define DO_VQDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 0, S) +#define DO_VQRDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 1, S) +#define DO_VQRDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 1, S) +#define DO_VQRDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 1, S) + +#define DO_VQDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 0, S) +#define DO_VQDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 0, S) +#define DO_VQDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 0, S) +#define DO_VQRDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 1, S) +#define DO_VQRDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 1, S) +#define DO_VQRDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 1, S) + +DO_2OP_SAT_ACC_SCALAR(vqdmlahb, 1, int8_t, DO_VQDMLAH_B) +DO_2OP_SAT_ACC_SCALAR(vqdmlahh, 2, int16_t, DO_VQDMLAH_H) +DO_2OP_SAT_ACC_SCALAR(vqdmlahw, 4, int32_t, DO_VQDMLAH_W) +DO_2OP_SAT_ACC_SCALAR(vqrdmlahb, 1, int8_t, DO_VQRDMLAH_B) +DO_2OP_SAT_ACC_SCALAR(vqrdmlahh, 2, int16_t, DO_VQRDMLAH_H) +DO_2OP_SAT_ACC_SCALAR(vqrdmlahw, 4, int32_t, DO_VQRDMLAH_W) + +DO_2OP_SAT_ACC_SCALAR(vqdmlashb, 1, int8_t, DO_VQDMLASH_B) +DO_2OP_SAT_ACC_SCALAR(vqdmlashh, 2, int16_t, DO_VQDMLASH_H) +DO_2OP_SAT_ACC_SCALAR(vqdmlashw, 4, int32_t, DO_VQDMLASH_W) +DO_2OP_SAT_ACC_SCALAR(vqrdmlashb, 1, int8_t, DO_VQRDMLASH_B) +DO_2OP_SAT_ACC_SCALAR(vqrdmlashh, 2, int16_t, DO_VQRDMLASH_H) +DO_2OP_SAT_ACC_SCALAR(vqrdmlashw, 4, int32_t, DO_VQRDMLASH_W) + /* Vector by scalar plus vector */ #define DO_VMLA(D, N, M) ((N) * (M) + (D)) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index f8899af352..e3e115c1aa 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -622,6 +622,10 @@ DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar) DO_2OP_SCALAR(VBRSR, vbrsr) DO_2OP_SCALAR(VMLA, vmla) DO_2OP_SCALAR(VMLAS, vmlas) +DO_2OP_SCALAR(VQDMLAH, vqdmlah) +DO_2OP_SCALAR(VQRDMLAH, vqrdmlah) +DO_2OP_SCALAR(VQDMLASH, vqdmlash) +DO_2OP_SCALAR(VQRDMLASH, vqrdmlash) static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a) { From 398e7cd3cd7a82eb04d236c7e30171f058f234b7 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:55 +0100 Subject: [PATCH 28/44] target/arm: Implement MVE VQABS, VQNEG Implement the MVE 1-operand saturating operations VQABS and VQNEG. 
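As a rough illustration (this sketch is not part of the patch, and the function name is invented for the example), the per-lane VQABS.S32 operation is approximately:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Sketch only: the one input that saturates is INT32_MIN, which is
     * clamped to INT32_MAX and sets the sticky QC saturation flag.
     */
    static int32_t vqabs_s32_sketch(int32_t n, bool *qc)
    {
        if (n == INT32_MIN) {
            *qc = true;
            return INT32_MAX;
        }
        return n < 0 ? -n : n;
    }

VQNEG negates every element rather than just the negative ones, and again only INT32_MIN saturates.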
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 8 ++++++++ target/arm/mve.decode | 3 +++ target/arm/mve_helper.c | 37 +++++++++++++++++++++++++++++++++++++ target/arm/translate-mve.c | 2 ++ 4 files changed, 50 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 2f54396b2d..f9345bfafc 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -76,6 +76,14 @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqabsb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqabsh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqabsw, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr) + DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 7a6de3991b..a05b882f9d 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -279,6 +279,9 @@ VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op +VQABS 1111 1111 1 . 11 .. 00 ... 0 0111 01 . 0 ... 0 @1op +VQNEG 1111 1111 1 . 11 .. 00 ... 0 0111 11 . 0 ... 0 @1op + &vdup qd rt size # Qd is in the fields usually named Qn @vdup .... .... . . .. ... . rt:4 .... . . . . .... 
qd=%qn &vdup diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index a69fcd2243..6539012ddd 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -2200,3 +2200,40 @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm) } mve_advance_vpt(env); } + +#define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \ + { \ + TYPE *d = vd, *m = vm; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + bool qc = false; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + bool sat = false; \ + mergemask(&d[H##ESIZE(e)], FN(m[H##ESIZE(e)], &sat), mask); \ + qc |= sat & mask & 1; \ + } \ + if (qc) { \ + env->vfp.qc[0] = qc; \ + } \ + mve_advance_vpt(env); \ + } + +#define DO_VQABS_B(N, SATP) \ + do_sat_bhs(DO_ABS((int64_t)N), INT8_MIN, INT8_MAX, SATP) +#define DO_VQABS_H(N, SATP) \ + do_sat_bhs(DO_ABS((int64_t)N), INT16_MIN, INT16_MAX, SATP) +#define DO_VQABS_W(N, SATP) \ + do_sat_bhs(DO_ABS((int64_t)N), INT32_MIN, INT32_MAX, SATP) + +#define DO_VQNEG_B(N, SATP) do_sat_bhs(-(int64_t)N, INT8_MIN, INT8_MAX, SATP) +#define DO_VQNEG_H(N, SATP) do_sat_bhs(-(int64_t)N, INT16_MIN, INT16_MAX, SATP) +#define DO_VQNEG_W(N, SATP) do_sat_bhs(-(int64_t)N, INT32_MIN, INT32_MAX, SATP) + +DO_1OP_SAT(vqabsb, 1, int8_t, DO_VQABS_B) +DO_1OP_SAT(vqabsh, 2, int16_t, DO_VQABS_H) +DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W) + +DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B) +DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H) +DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index e3e115c1aa..f2213ec8cd 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -275,6 +275,8 @@ DO_1OP(VCLZ, vclz) DO_1OP(VCLS, vcls) DO_1OP(VABS, vabs) DO_1OP(VNEG, vneg) +DO_1OP(VQABS, vqabs) +DO_1OP(VQNEG, vqneg) /* Narrowing moves: only size 0 and 1 are valid */ #define DO_VMOVN(INSN, FN) \ From d5c571ea6d1558934b0d1a95c51a2c084cf4fd85 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:55 +0100 Subject: [PATCH 29/44] target/arm: Implement MVE VMAXA, VMINA Implement the MVE VMAXA and VMINA insns, which take the absolute value of the signed elements in the input vector and then accumulate the unsigned max or min into the destination vector. 
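A minimal scalar sketch of one VMAXA lane may help here (illustrative only, not part of the patch; ref_vmaxa_b is an invented name, and the patch implements this with the DO_VMAXMINA macro below). The destination lane is read and written as unsigned, while the source lane is signed and its absolute value is taken first, so abs(INT8_MIN) == 128 is still representable in the unsigned result:

#include <stdint.h>
#include <stdio.h>

/* One lane of VMAXA.S8: unsigned max of the old (unsigned) lane and abs(signed lane) */
static uint8_t ref_vmaxa_b(uint8_t d, int8_t m)
{
    uint8_t absm = m < 0 ? (uint8_t)(-(int)m) : (uint8_t)m;   /* abs(-128) == 128 */
    return d > absm ? d : absm;
}

int main(void)
{
    printf("%d\n", ref_vmaxa_b(0x90, -100));     /* 144: the old lane value wins */
    printf("%d\n", ref_vmaxa_b(10, INT8_MIN));   /* 128: abs(-128) accumulates */
    return 0;
}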
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 8 ++++++++ target/arm/mve.decode | 4 ++++ target/arm/mve_helper.c | 26 ++++++++++++++++++++++++++ target/arm/translate-mve.c | 2 ++ 4 files changed, 40 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index f9345bfafc..651020aaad 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -84,6 +84,14 @@ DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vmaxab, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vmaxah, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vmaxaw, TCG_CALL_NO_WG, void, env, ptr, ptr) + +DEF_HELPER_FLAGS_3(mve_vminab, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vminah, TCG_CALL_NO_WG, void, env, ptr, ptr) +DEF_HELPER_FLAGS_3(mve_vminaw, TCG_CALL_NO_WG, void, env, ptr, ptr) + DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr) DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index a05b882f9d..0955ed0cc2 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -156,6 +156,8 @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op VQMOVUNB 111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op + VMAXA 111 0 1110 0 . 11 .. 11 ... 0 1110 1 0 . 0 ... 1 @1op + VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op } @@ -176,6 +178,8 @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op VQMOVUNT 111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op VQMOVN_TS 111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op + VMINA 111 0 1110 0 . 11 .. 11 ... 1 1110 1 0 . 0 ... 1 @1op + VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op } diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 6539012ddd..d326205cbf 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -2237,3 +2237,29 @@ DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W) DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B) DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H) DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W) + +/* + * VMAXA, VMINA: vd is unsigned; vm is signed, and we take its + * absolute value; we then do an unsigned comparison. 
+ */ +#define DO_VMAXMINA(OP, ESIZE, STYPE, UTYPE, FN) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \ + { \ + UTYPE *d = vd; \ + STYPE *m = vm; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + UTYPE r = DO_ABS(m[H##ESIZE(e)]); \ + r = FN(d[H##ESIZE(e)], r); \ + mergemask(&d[H##ESIZE(e)], r, mask); \ + } \ + mve_advance_vpt(env); \ + } + +DO_VMAXMINA(vmaxab, 1, int8_t, uint8_t, DO_MAX) +DO_VMAXMINA(vmaxah, 2, int16_t, uint16_t, DO_MAX) +DO_VMAXMINA(vmaxaw, 4, int32_t, uint32_t, DO_MAX) +DO_VMAXMINA(vminab, 1, int8_t, uint8_t, DO_MIN) +DO_VMAXMINA(vminah, 2, int16_t, uint16_t, DO_MIN) +DO_VMAXMINA(vminaw, 4, int32_t, uint32_t, DO_MIN) diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index f2213ec8cd..02c26987a2 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -277,6 +277,8 @@ DO_1OP(VABS, vabs) DO_1OP(VNEG, vneg) DO_1OP(VQABS, vqabs) DO_1OP(VQNEG, vqneg) +DO_1OP(VMAXA, vmaxa) +DO_1OP(VMINA, vmina) /* Narrowing moves: only size 0 and 1 are valid */ #define DO_VMOVN(INSN, FN) \ From 1241f148d52eea7c9350df918da0eafdfc539327 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:55 +0100 Subject: [PATCH 30/44] target/arm: Implement MVE VMOV to/from 2 general-purpose registers Implement the MVE VMOV forms that move data between 2 general-purpose registers and 2 32-bit lanes in a vector register. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/mve.decode | 4 ++ target/arm/translate-a32.h | 1 + target/arm/translate-mve.c | 85 ++++++++++++++++++++++++++++++++++++++ target/arm/translate-vfp.c | 2 +- 4 files changed, 91 insertions(+), 1 deletion(-) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 0955ed0cc2..774ee2a1a5 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -136,6 +136,10 @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \ size=2 p=1 +# Moves between 2 32-bit vector lanes and 2 general purpose registers +VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd +VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd + # Vector 2-op VAND 1110 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz VBIC 1110 1111 0 . 01 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h index 6dfcafe179..6f4d65ddb0 100644 --- a/target/arm/translate-a32.h +++ b/target/arm/translate-a32.h @@ -49,6 +49,7 @@ void gen_rev16(TCGv_i32 dest, TCGv_i32 var); void clear_eci_state(DisasContext *s); bool mve_eci_check(DisasContext *s); void mve_update_and_store_eci(DisasContext *s); +bool mve_skip_vmov(DisasContext *s, int vn, int index, int size); static inline TCGv_i32 load_cpu_offset(int offset) { diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 02c26987a2..93707fdd68 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -1507,3 +1507,88 @@ static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn) DO_VABAV(VABAV_S, vabavs) DO_VABAV(VABAV_U, vabavu) + +static bool trans_VMOV_to_2gp(DisasContext *s, arg_VMOV_to_2gp *a) +{ + /* + * VMOV two 32-bit vector lanes to two general-purpose registers. + * This insn is not predicated but it is subject to beat-wise + * execution if it is not in an IT block. 
For us this means
+ * only that if PSR.ECI says we should not be executing the beat
+ * corresponding to the lane of the vector register being accessed
+ * then we should skip performing the move, and that we need to do
+ * the usual check for bad ECI state and advance of ECI state.
+ * (If PSR.ECI is non-zero then we cannot be in an IT block.)
+ */
+ TCGv_i32 tmp;
+ int vd;
+
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
+ a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15 ||
+ a->rt == a->rt2) {
+ /* Rt/Rt2 cases are UNPREDICTABLE */
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ /* Convert Qreg index to Dreg for read_neon_element32() etc */
+ vd = a->qd * 2;
+
+ if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, vd, a->idx, MO_32);
+ store_reg(s, a->rt, tmp);
+ }
+ if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, vd + 1, a->idx, MO_32);
+ store_reg(s, a->rt2, tmp);
+ }
+
+ mve_update_and_store_eci(s);
+ return true;
+}
+
+static bool trans_VMOV_from_2gp(DisasContext *s, arg_VMOV_to_2gp *a)
+{
+ /*
+ * VMOV two general-purpose registers to two 32-bit vector lanes.
+ * This insn is not predicated but it is subject to beat-wise
+ * execution if it is not in an IT block. For us this means
+ * only that if PSR.ECI says we should not be executing the beat
+ * corresponding to the lane of the vector register being accessed
+ * then we should skip performing the move, and that we need to do
+ * the usual check for bad ECI state and advance of ECI state.
+ * (If PSR.ECI is non-zero then we cannot be in an IT block.)
+ */
+ TCGv_i32 tmp;
+ int vd;
+
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
+ a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15) {
+ /* Rt/Rt2 cases are UNPREDICTABLE */
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ /* Convert Qreg idx to Dreg for read_neon_element32() etc */
+ vd = a->qd * 2;
+
+ if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
+ tmp = load_reg(s, a->rt);
+ write_neon_element32(tmp, vd, a->idx, MO_32);
+ tcg_temp_free_i32(tmp);
+ }
+ if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
+ tmp = load_reg(s, a->rt2);
+ write_neon_element32(tmp, vd + 1, a->idx, MO_32);
+ tcg_temp_free_i32(tmp);
+ }
+
+ mve_update_and_store_eci(s);
+ return true;
+}
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index b2991e21ec..e2eb797c82 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -581,7 +581,7 @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
 return true;
 }
-static bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
+bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
 {
 /*
 * In a CPU with MVE, the VMOV (vector lane to general-purpose register)

From fea3958fa11c75b4f3f335ac0ce4cfc5cf0af7de Mon Sep 17 00:00:00 2001
From: Peter Maydell
Date: Fri, 13 Aug 2021 17:11:56 +0100
Subject: [PATCH 31/44] target/arm: Implement MVE VPNOT

Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
(subject to both predication and to beatwise execution).
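The P0 update rule can be tried out stand-alone with plain masks (illustrative only, not part of the patch; ref_vpnot_p0 is an invented name, and the real helper below obtains the masks from the CPU state via mve_element_mask() and mve_eci_mask()):

#include <stdint.h>
#include <stdio.h>

/*
 * mask: 1 bits for lanes that are executed (per ECI) and unpredicated (per VPT);
 * eci_mask: 1 bits for lanes in executed beats. Lanes in unexecuted beats keep
 * their P0 bits, executed-but-predicated lanes get 0, everything else is inverted.
 */
static uint16_t ref_vpnot_p0(uint16_t p0, uint16_t mask, uint16_t eci_mask)
{
    uint16_t beatpred = ~p0 & mask;
    return (p0 & ~eci_mask) | (beatpred & eci_mask);
}

int main(void)
{
    /* all beats executed, nothing predicated: plain bitwise NOT of P0 */
    printf("%04x\n", ref_vpnot_p0(0x00ff, 0xffff, 0xffff));   /* ff00 */
    /* first beat (low 4 bits) skipped by ECI: those P0 bits are unchanged */
    printf("%04x\n", ref_vpnot_p0(0x00ff, 0xfff0, 0xfff0));   /* ff0f */
    return 0;
}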
Signed-off-by: Peter Maydell
Reviewed-by: Richard Henderson
---
 target/arm/helper-mve.h | 1 +
 target/arm/mve.decode | 1 +
 target/arm/mve_helper.c | 17 +++++++++++++++++
 target/arm/translate-mve.c | 19 +++++++++++++++++++
 4 files changed, 38 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index 651020aaad..8cb941912f 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -119,6 +119,7 @@ DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)
 DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index 774ee2a1a5..40bd0c04b5 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -571,6 +571,7 @@ VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
 VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
 {
+ VPNOT 1111 1110 0 0 11 000 1 000 0 1111 0100 1101
 VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
 VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
 }
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index d326205cbf..c22a00c5ed 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -2201,6 +2201,23 @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
 mve_advance_vpt(env);
 }
+void HELPER(mve_vpnot)(CPUARMState *env)
+{
+ /*
+ * P0 bits for unexecuted beats (where eci_mask is 0) are unchanged.
+ * P0 bits for predicated lanes in executed beats (where mask is 0) are 0.
+ * P0 bits otherwise are inverted.
+ * (This is the same logic as VCMP.)
+ * This insn is itself subject to predication and to beat-wise execution,
+ * and after it executes VPT state advances in the usual way.
+ */
+ uint16_t mask = mve_element_mask(env);
+ uint16_t eci_mask = mve_eci_mask(env);
+ uint16_t beatpred = ~env->v7m.vpr & mask;
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (beatpred & eci_mask);
+ mve_advance_vpt(env);
+}
+
 #define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
 void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
 { \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index 93707fdd68..cc2e58cfe2 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -887,6 +887,25 @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
 return true;
 }
+static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
+{
+ /*
+ * Invert the predicate in VPR.P0. We have to call out to
+ * a helper because this insn itself is beatwise and can
+ * be predicated.
+ */
+ if (!dc_isar_feature(aa32_mve, s)) {
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ gen_helper_mve_vpnot(cpu_env);
+ mve_update_eci(s);
+ return true;
+}
+
 static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
 {
 /* VADDV: vector add across vector */

From 0f31e37c7f0b9577c6ce46304158ccd7c935006b Mon Sep 17 00:00:00 2001
From: Peter Maydell
Date: Fri, 13 Aug 2021 17:11:56 +0100
Subject: [PATCH 32/44] target/arm: Implement MVE VCTP

Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so
that any element at index Rn or greater is predicated. As with VPNOT,
this insn itself is predicable and subject to beatwise execution.
The calculation of the mask is the same as is used to determine ltpmask in mve_element_mask(), but we precalculate masklen in generated code to avoid having to have 4 helpers specialized by size. We put the decode line in with the low-overhead-loop insns in t32.decode because it's logically part of that collection of insn patterns, even though it is an MVE only insn. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 2 ++ target/arm/mve_helper.c | 20 ++++++++++++++++++++ target/arm/t32.decode | 1 + target/arm/translate-a32.h | 1 + target/arm/translate-mve.c | 2 +- target/arm/translate.c | 33 +++++++++++++++++++++++++++++++++ 6 files changed, 58 insertions(+), 1 deletion(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index 8cb941912f..b6cf3f0c94 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -121,6 +121,8 @@ DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env) +DEF_HELPER_FLAGS_2(mve_vctp, TCG_CALL_NO_WG, void, env, i32) + DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr) diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index c22a00c5ed..1752555a21 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -2218,6 +2218,26 @@ void HELPER(mve_vpnot)(CPUARMState *env) mve_advance_vpt(env); } +/* + * VCTP: P0 unexecuted bits unchanged, predicated bits zeroed, + * otherwise set according to value of Rn. The calculation of + * newmask here works in the same way as the calculation of the + * ltpmask in mve_element_mask(), but we have pre-calculated + * the masklen in the generated code. + */ +void HELPER(mve_vctp)(CPUARMState *env, uint32_t masklen) +{ + uint16_t mask = mve_element_mask(env); + uint16_t eci_mask = mve_eci_mask(env); + uint16_t newmask; + + assert(masklen <= 16); + newmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0; + newmask &= mask; + env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (newmask & eci_mask); + mve_advance_vpt(env); +} + #define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \ { \ diff --git a/target/arm/t32.decode b/target/arm/t32.decode index 2d47f31f14..78fadef9d6 100644 --- a/target/arm/t32.decode +++ b/target/arm/t32.decode @@ -748,5 +748,6 @@ BL 1111 0. .......... 11.1 ............ 
@branch24 # This is DLSTP DLS 1111 0 0000 0 size:2 rn:4 1110 0000 0000 0001 } + VCTP 1111 0 0000 0 size:2 rn:4 1110 1000 0000 0001 ] } diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h index 6f4d65ddb0..88f15df60e 100644 --- a/target/arm/translate-a32.h +++ b/target/arm/translate-a32.h @@ -48,6 +48,7 @@ long neon_element_offset(int reg, int element, MemOp memop); void gen_rev16(TCGv_i32 dest, TCGv_i32 var); void clear_eci_state(DisasContext *s); bool mve_eci_check(DisasContext *s); +void mve_update_eci(DisasContext *s); void mve_update_and_store_eci(DisasContext *s); bool mve_skip_vmov(DisasContext *s, int vn, int index, int size); diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index cc2e58cfe2..865d5acbe7 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -93,7 +93,7 @@ bool mve_eci_check(DisasContext *s) } } -static void mve_update_eci(DisasContext *s) +void mve_update_eci(DisasContext *s) { /* * The helper function will always update the CPUState field, diff --git a/target/arm/translate.c b/target/arm/translate.c index 80c282669f..804a53279b 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -8669,6 +8669,39 @@ static bool trans_LCTP(DisasContext *s, arg_LCTP *a) return true; } +static bool trans_VCTP(DisasContext *s, arg_VCTP *a) +{ + /* + * M-profile Create Vector Tail Predicate. This insn is itself + * predicated and is subject to beatwise execution. + */ + TCGv_i32 rn_shifted, masklen; + + if (!dc_isar_feature(aa32_mve, s) || a->rn == 13 || a->rn == 15) { + return false; + } + + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + /* + * We pre-calculate the mask length here to avoid having + * to have multiple helpers specialized for size. + * We pass the helper "rn <= (1 << (4 - size)) ? (rn << size) : 16". + */ + rn_shifted = tcg_temp_new_i32(); + masklen = load_reg(s, a->rn); + tcg_gen_shli_i32(rn_shifted, masklen, a->size); + tcg_gen_movcond_i32(TCG_COND_LEU, masklen, + masklen, tcg_constant_i32(1 << (4 - a->size)), + rn_shifted, tcg_constant_i32(16)); + gen_helper_mve_vctp(cpu_env, masklen); + tcg_temp_free_i32(masklen); + tcg_temp_free_i32(rn_shifted); + mve_update_eci(s); + return true; +} static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half) { From dc18628b1833157a50a424cb6b83b63eca560402 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:56 +0100 Subject: [PATCH 33/44] target/arm: Implement MVE scatter-gather insns Implement the MVE gather-loads and scatter-stores which form the address by adding a base value from a scalar register to an offset in each element of a vector. 
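A reference model of the address formation may be useful (illustrative only, not part of the patch; ref_vldrw_gather is an invented name, plain arrays stand in for guest memory and the Q registers, and the ECI beat-skipping that the real helpers also perform is omitted). Each 32-bit lane adds its own offset from Qm to the scalar base, optionally scaled by the element size, and predicated lanes of a gather load are zeroed:

#include <stdint.h>
#include <stdio.h>

/* Reference VLDRW.U32 gather: 4 lanes, 4 predicate bits per 32-bit lane as in the helpers */
static void ref_vldrw_gather(uint32_t *qd, const uint32_t *qm,
                             const uint8_t *mem, uint32_t base,
                             int os, uint16_t mask)
{
    for (int e = 0; e < 4; e++, mask >>= 4) {
        uint32_t addr = base + (os ? qm[e] << 2 : qm[e]);   /* _os_ form scales by esize */
        if (mask & 1) {
            /* little-endian 32-bit load from the per-lane address */
            qd[e] = mem[addr] | mem[addr + 1] << 8 |
                    mem[addr + 2] << 16 | (uint32_t)mem[addr + 3] << 24;
        } else {
            qd[e] = 0;   /* predicated lanes of a gather load are zeroed */
        }
    }
}

int main(void)
{
    uint8_t mem[64];
    uint32_t offsets[4] = { 0, 12, 4, 8 };   /* offsets need not be in order */
    uint32_t qd[4];

    for (int i = 0; i < 64; i++) {
        mem[i] = i;
    }
    ref_vldrw_gather(qd, offsets, mem, 16, 0, 0xffff);
    for (int e = 0; e < 4; e++) {
        printf("lane %d = 0x%08x\n", e, qd[e]);
    }
    return 0;
}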
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 32 +++++++++ target/arm/mve.decode | 12 ++++ target/arm/mve_helper.c | 129 +++++++++++++++++++++++++++++++++++++ target/arm/translate-mve.c | 97 ++++++++++++++++++++++++++++ 4 files changed, 270 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index b6cf3f0c94..ba842b97c1 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -33,6 +33,38 @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32) DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32) DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrb_sg_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrb_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrh_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vldrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vstrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + +DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32) DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 40bd0c04b5..6c3f45c719 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -42,11 +42,18 @@ &shl_scalar qda rm size &vmaxv qm rda size &vabav qn qm rda size +&vldst_sg qd qm rn size msize os + +# scatter-gather memory size is in bits 6:4 +%sg_msize 6:1 4:1 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0 # Note that both Rn and Qd are 3 bits only (no D bit) @vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr +@vldst_sg .... .... .... rn:4 .... ... size:2 ... ... 
os:1 &vldst_sg \ + qd=%qd qm=%qm msize=%sg_msize + @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn @@ -136,6 +143,11 @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \ size=2 p=1 +# gather loads/scatter stores +VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg +VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg +VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg + # Moves between 2 32-bit vector lanes and 2 general purpose registers VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 1752555a21..2b882db1c3 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -206,6 +206,135 @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t) #undef DO_VLDR #undef DO_VSTR +/* + * Gather loads/scatter stores. Here each element of Qm specifies + * an offset to use from the base register Rm. In the _os_ versions + * that offset is scaled by the element size. + * For loads, predicated lanes are zeroed instead of retaining + * their previous values. + */ +#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ + uint32_t base) \ + { \ + TYPE *d = vd; \ + OFFTYPE *m = vm; \ + uint16_t mask = mve_element_mask(env); \ + uint16_t eci_mask = mve_eci_mask(env); \ + unsigned e; \ + uint32_t addr; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \ + if (!(eci_mask & 1)) { \ + continue; \ + } \ + addr = ADDRFN(base, m[H##ESIZE(e)]); \ + d[H##ESIZE(e)] = (mask & 1) ? \ + cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \ + } \ + mve_advance_vpt(env); \ + } + +/* We know here TYPE is unsigned so always the same as the offset type */ +#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ + uint32_t base) \ + { \ + TYPE *d = vd; \ + TYPE *m = vm; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + uint32_t addr; \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + addr = ADDRFN(base, m[H##ESIZE(e)]); \ + if (mask & 1) { \ + cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \ + } \ + } \ + mve_advance_vpt(env); \ + } + +/* + * 64-bit accesses are slightly different: they are done as two 32-bit + * accesses, controlled by the predicate mask for the relevant beat, + * and with a single 32-bit offset in the first of the two Qm elements. + * Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little). + */ +#define DO_VLDR64_SG(OP, ADDRFN) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ + uint32_t base) \ + { \ + uint32_t *d = vd; \ + uint32_t *m = vm; \ + uint16_t mask = mve_element_mask(env); \ + uint16_t eci_mask = mve_eci_mask(env); \ + unsigned e; \ + uint32_t addr; \ + for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \ + if (!(eci_mask & 1)) { \ + continue; \ + } \ + addr = ADDRFN(base, m[H4(e & ~1)]); \ + addr += 4 * (e & 1); \ + d[H4(e)] = (mask & 1) ? 
cpu_ldl_data_ra(env, addr, GETPC()) : 0; \ + } \ + mve_advance_vpt(env); \ + } + +#define DO_VSTR64_SG(OP, ADDRFN) \ + void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ + uint32_t base) \ + { \ + uint32_t *d = vd; \ + uint32_t *m = vm; \ + uint16_t mask = mve_element_mask(env); \ + unsigned e; \ + uint32_t addr; \ + for (e = 0; e < 16 / 4; e++, mask >>= 4) { \ + addr = ADDRFN(base, m[H4(e & ~1)]); \ + addr += 4 * (e & 1); \ + if (mask & 1) { \ + cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \ + } \ + } \ + mve_advance_vpt(env); \ + } + +#define ADDR_ADD(BASE, OFFSET) ((BASE) + (OFFSET)) +#define ADDR_ADD_OSH(BASE, OFFSET) ((BASE) + ((OFFSET) << 1)) +#define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2)) +#define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3)) + +DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD) +DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD) +DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD) + +DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD) +DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD) +DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD) +DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD) +DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD) +DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD) +DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD) + +DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH) +DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH) +DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH) +DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW) +DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD) + +DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD) +DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD) +DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD) +DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD) +DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD) +DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD) +DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD) + +DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH) +DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH) +DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW) +DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD) + /* * The mergemask(D, R, M) macro performs the operation "*D = R" but * storing only the bytes which correspond to 1 bits in M, diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 865d5acbe7..24d4e57ead 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -34,6 +34,7 @@ static inline int vidup_imm(DisasContext *s, int x) #include "decode-mve.c.inc" typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32); +typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); @@ -209,6 +210,102 @@ DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h, MO_8) DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w, MO_8) DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w, MO_16) +static bool do_ldst_sg(DisasContext *s, arg_vldst_sg *a, MVEGenLdStSGFn fn) +{ + TCGv_i32 addr; + TCGv_ptr qd, qm; + + if (!dc_isar_feature(aa32_mve, s) || + !mve_check_qreg_bank(s, a->qd | a->qm) || + !fn || a->rn == 15) { + /* Rn case is UNPREDICTABLE */ + return false; + } + + if 
(!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + addr = load_reg(s, a->rn); + + qd = mve_qreg_ptr(a->qd); + qm = mve_qreg_ptr(a->qm); + fn(cpu_env, qd, qm, addr); + tcg_temp_free_ptr(qd); + tcg_temp_free_ptr(qm); + tcg_temp_free_i32(addr); + mve_update_eci(s); + return true; +} + +/* + * The naming scheme here is "vldrb_sg_sh == in-memory byte loads + * signextended to halfword elements in register". _os_ indicates that + * the offsets in Qm should be scaled by the element size. + */ +/* This macro is just to make the arrays more compact in these functions */ +#define F(N) gen_helper_mve_##N + +/* VLDRB/VSTRB (ie msize 1) with OS=1 is UNPREDICTABLE; we UNDEF */ +static bool trans_VLDR_S_sg(DisasContext *s, arg_vldst_sg *a) +{ + static MVEGenLdStSGFn * const fns[2][4][4] = { { + { NULL, F(vldrb_sg_sh), F(vldrb_sg_sw), NULL }, + { NULL, NULL, F(vldrh_sg_sw), NULL }, + { NULL, NULL, NULL, NULL }, + { NULL, NULL, NULL, NULL } + }, { + { NULL, NULL, NULL, NULL }, + { NULL, NULL, F(vldrh_sg_os_sw), NULL }, + { NULL, NULL, NULL, NULL }, + { NULL, NULL, NULL, NULL } + } + }; + if (a->qd == a->qm) { + return false; /* UNPREDICTABLE */ + } + return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]); +} + +static bool trans_VLDR_U_sg(DisasContext *s, arg_vldst_sg *a) +{ + static MVEGenLdStSGFn * const fns[2][4][4] = { { + { F(vldrb_sg_ub), F(vldrb_sg_uh), F(vldrb_sg_uw), NULL }, + { NULL, F(vldrh_sg_uh), F(vldrh_sg_uw), NULL }, + { NULL, NULL, F(vldrw_sg_uw), NULL }, + { NULL, NULL, NULL, F(vldrd_sg_ud) } + }, { + { NULL, NULL, NULL, NULL }, + { NULL, F(vldrh_sg_os_uh), F(vldrh_sg_os_uw), NULL }, + { NULL, NULL, F(vldrw_sg_os_uw), NULL }, + { NULL, NULL, NULL, F(vldrd_sg_os_ud) } + } + }; + if (a->qd == a->qm) { + return false; /* UNPREDICTABLE */ + } + return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]); +} + +static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a) +{ + static MVEGenLdStSGFn * const fns[2][4][4] = { { + { F(vstrb_sg_ub), F(vstrb_sg_uh), F(vstrb_sg_uw), NULL }, + { NULL, F(vstrh_sg_uh), F(vstrh_sg_uw), NULL }, + { NULL, NULL, F(vstrw_sg_uw), NULL }, + { NULL, NULL, NULL, F(vstrd_sg_ud) } + }, { + { NULL, NULL, NULL, NULL }, + { NULL, F(vstrh_sg_os_uh), F(vstrh_sg_os_uw), NULL }, + { NULL, NULL, F(vstrw_sg_os_uw), NULL }, + { NULL, NULL, NULL, F(vstrd_sg_os_ud) } + } + }; + return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]); +} + +#undef F + static bool trans_VDUP(DisasContext *s, arg_VDUP *a) { TCGv_ptr qd; From fac80f0856cc465b21e2e59a64146b3540e055db Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:57 +0100 Subject: [PATCH 34/44] target/arm: Implement MVE scatter-gather immediate forms Implement the MVE VLDR/VSTR insns which do scatter-gather using base addresses from Qm plus or minus an immediate offset (possibly with writeback). Note that writeback is not predicated but it does have to honour ECI state, so we have to add an eci_mask check to the VSTR_SG macros (the VLDR_SG macros already needed this to be able to distinguish "skip beat" from "set predicated element to 0"). 
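A sketch of the immediate-offset form and its writeback behaviour (illustrative only, not part of the patch; ref_vstrw_sg_imm is an invented name, arrays stand in for the vector registers and guest memory, and the predication/ECI handling described above is omitted). The base address comes from each Qm lane, the immediate is scaled by the memory element size, and with writeback the computed address is written back into the Qm lane:

#include <stdint.h>
#include <stdio.h>

/* Reference VSTRW (scatter, immediate offset): base addresses come from the Qm lanes */
static void ref_vstrw_sg_imm(const uint32_t *qd, uint32_t *qm, uint8_t *mem,
                             unsigned imm, int add, int writeback)
{
    int32_t offset = imm << 2;     /* the 7-bit immediate is scaled by the 32-bit msize */
    if (!add) {
        offset = -offset;          /* the 'a' bit selects add vs subtract */
    }
    for (int e = 0; e < 4; e++) {
        uint32_t addr = qm[e] + offset;
        mem[addr]     = qd[e];          /* little-endian 32-bit store */
        mem[addr + 1] = qd[e] >> 8;
        mem[addr + 2] = qd[e] >> 16;
        mem[addr + 3] = qd[e] >> 24;
        if (writeback) {
            qm[e] = addr;               /* writeback updates the Qm lane */
        }
    }
}

int main(void)
{
    uint8_t mem[256] = { 0 };
    uint32_t qd[4] = { 1, 2, 3, 4 };
    uint32_t qm[4] = { 0, 16, 32, 48 };

    ref_vstrw_sg_imm(qd, qm, mem, 2, 1, 1);   /* store each lane at qm[e] + 8 */
    for (int e = 0; e < 4; e++) {
        printf("qm[%d] is now %u\n", e, qm[e]);   /* 8, 24, 40, 56 */
    }
    return 0;
}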
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 5 +++ target/arm/mve.decode | 10 +++++ target/arm/mve_helper.c | 91 ++++++++++++++++++++++++-------------- target/arm/translate-mve.c | 72 ++++++++++++++++++++++++++++++ 4 files changed, 146 insertions(+), 32 deletions(-) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index ba842b97c1..a85a7e1b75 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -65,6 +65,11 @@ DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) + DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32) DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 6c3f45c719..48882dd7f3 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -43,6 +43,7 @@ &vmaxv qm rda size &vabav qn qm rda size &vldst_sg qd qm rn size msize os +&vldst_sg_imm qd qm a w imm # scatter-gather memory size is in bits 6:4 %sg_msize 6:1 4:1 @@ -54,6 +55,10 @@ @vldst_sg .... .... .... rn:4 .... ... size:2 ... ... os:1 &vldst_sg \ qd=%qd qm=%qm msize=%sg_msize +# Qm is in the fields usually labeled Qn +@vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \ + qd=%qd qm=%qn + @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn @@ -148,6 +153,11 @@ VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg +VLDRW_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1110 .... .... @vldst_sg_imm +VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm +VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm +VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm + # Moves between 2 32-bit vector lanes and 2 general purpose registers VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index 2b882db1c3..bbbaa53807 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -213,7 +213,7 @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t) * For loads, predicated lanes are zeroed instead of retaining * their previous values. */ -#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \ +#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN, WB) \ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ uint32_t base) \ { \ @@ -230,25 +230,35 @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t) addr = ADDRFN(base, m[H##ESIZE(e)]); \ d[H##ESIZE(e)] = (mask & 1) ? 
\ cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \ + if (WB) { \ + m[H##ESIZE(e)] = addr; \ + } \ } \ mve_advance_vpt(env); \ } /* We know here TYPE is unsigned so always the same as the offset type */ -#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \ +#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN, WB) \ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ uint32_t base) \ { \ TYPE *d = vd; \ TYPE *m = vm; \ uint16_t mask = mve_element_mask(env); \ + uint16_t eci_mask = mve_eci_mask(env); \ unsigned e; \ uint32_t addr; \ - for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \ + for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \ + if (!(eci_mask & 1)) { \ + continue; \ + } \ addr = ADDRFN(base, m[H##ESIZE(e)]); \ if (mask & 1) { \ cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \ } \ + if (WB) { \ + m[H##ESIZE(e)] = addr; \ + } \ } \ mve_advance_vpt(env); \ } @@ -258,8 +268,10 @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t) * accesses, controlled by the predicate mask for the relevant beat, * and with a single 32-bit offset in the first of the two Qm elements. * Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little). + * Address writeback happens on the odd beats and updates the address + * stored in the even-beat element. */ -#define DO_VLDR64_SG(OP, ADDRFN) \ +#define DO_VLDR64_SG(OP, ADDRFN, WB) \ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ uint32_t base) \ { \ @@ -276,25 +288,35 @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t) addr = ADDRFN(base, m[H4(e & ~1)]); \ addr += 4 * (e & 1); \ d[H4(e)] = (mask & 1) ? cpu_ldl_data_ra(env, addr, GETPC()) : 0; \ + if (WB && (e & 1)) { \ + m[H4(e & ~1)] = addr - 4; \ + } \ } \ mve_advance_vpt(env); \ } -#define DO_VSTR64_SG(OP, ADDRFN) \ +#define DO_VSTR64_SG(OP, ADDRFN, WB) \ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \ uint32_t base) \ { \ uint32_t *d = vd; \ uint32_t *m = vm; \ uint16_t mask = mve_element_mask(env); \ + uint16_t eci_mask = mve_eci_mask(env); \ unsigned e; \ uint32_t addr; \ - for (e = 0; e < 16 / 4; e++, mask >>= 4) { \ + for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \ + if (!(eci_mask & 1)) { \ + continue; \ + } \ addr = ADDRFN(base, m[H4(e & ~1)]); \ addr += 4 * (e & 1); \ if (mask & 1) { \ cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \ } \ + if (WB && (e & 1)) { \ + m[H4(e & ~1)] = addr - 4; \ + } \ } \ mve_advance_vpt(env); \ } @@ -304,36 +326,41 @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t) #define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2)) #define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3)) -DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD) -DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD) -DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD) +DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD, false) +DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD, false) +DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD, false) -DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD) -DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD) -DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD) -DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD) -DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD) -DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD) -DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD) +DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD, false) +DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, 
uint16_t, ADDR_ADD, false) +DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD, false) +DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD, false) +DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD, false) +DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, false) +DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD, false) -DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH) -DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH) -DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH) -DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW) -DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD) +DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH, false) +DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH, false) +DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH, false) +DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW, false) +DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD, false) -DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD) -DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD) -DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD) -DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD) -DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD) -DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD) -DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD) +DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD, false) +DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD, false) +DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD, false) +DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD, false) +DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD, false) +DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD, false) +DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD, false) -DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH) -DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH) -DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW) -DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD) +DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH, false) +DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH, false) +DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW, false) +DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD, false) + +DO_VLDR_SG(vldrw_sg_wb_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, true) +DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true) +DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true) +DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true) /* * The mergemask(D, R, M) macro performs the operation "*D = R" but diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index 24d4e57ead..d3cb339686 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -306,6 +306,78 @@ static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a) #undef F +static bool do_ldst_sg_imm(DisasContext *s, arg_vldst_sg_imm *a, + MVEGenLdStSGFn *fn, unsigned msize) +{ + uint32_t offset; + TCGv_ptr qd, qm; + + if (!dc_isar_feature(aa32_mve, s) || + !mve_check_qreg_bank(s, a->qd | a->qm) || + !fn) { + return false; + } + + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + offset = a->imm << msize; + if (!a->a) { + offset = -offset; + } + + qd = mve_qreg_ptr(a->qd); + qm = mve_qreg_ptr(a->qm); + fn(cpu_env, qd, qm, tcg_constant_i32(offset)); + tcg_temp_free_ptr(qd); + tcg_temp_free_ptr(qm); + mve_update_eci(s); + return true; +} + +static bool trans_VLDRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a) +{ + static MVEGenLdStSGFn * const fns[] = { + 
gen_helper_mve_vldrw_sg_uw, + gen_helper_mve_vldrw_sg_wb_uw, + }; + if (a->qd == a->qm) { + return false; /* UNPREDICTABLE */ + } + return do_ldst_sg_imm(s, a, fns[a->w], MO_32); +} + +static bool trans_VLDRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a) +{ + static MVEGenLdStSGFn * const fns[] = { + gen_helper_mve_vldrd_sg_ud, + gen_helper_mve_vldrd_sg_wb_ud, + }; + if (a->qd == a->qm) { + return false; /* UNPREDICTABLE */ + } + return do_ldst_sg_imm(s, a, fns[a->w], MO_64); +} + +static bool trans_VSTRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a) +{ + static MVEGenLdStSGFn * const fns[] = { + gen_helper_mve_vstrw_sg_uw, + gen_helper_mve_vstrw_sg_wb_uw, + }; + return do_ldst_sg_imm(s, a, fns[a->w], MO_32); +} + +static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a) +{ + static MVEGenLdStSGFn * const fns[] = { + gen_helper_mve_vstrd_sg_ud, + gen_helper_mve_vstrd_sg_wb_ud, + }; + return do_ldst_sg_imm(s, a, fns[a->w], MO_64); +} + static bool trans_VDUP(DisasContext *s, arg_VDUP *a) { TCGv_ptr qd; From 075e7e97e3a042854b8ea2827559891a577b4a6b Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 13 Aug 2021 17:11:57 +0100 Subject: [PATCH 35/44] target/arm: Implement MVE interleaving loads/stores Implement the MVE interleaving load/store functions VLD2, VLD4, VST2 and VST4. VLD2 loads 16 bytes of data from memory and writes to 2 consecutive Qregs; VLD4 loads 16 bytes of data from memory and writes to 4 consecutive Qregs. The 'pattern' field in the encoding determines the offset into memory which is accessed and also which elements in the Qregs are written to. (The intention is that a sequence of four consecutive VLD4 with different pattern values performs a complete de-interleaving load of 64 bytes into all elements of the 4 Qregs.) VST2 and VST4 do the same, but for stores. 
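The combined effect of a complete VLD4.8 sequence (all four pattern values) can be written as a simple element-at-a-time de-interleave (illustrative only, not part of the patch; ref_vld4_b is an invented name, and the helpers below instead work beat by beat with pattern-specific offsets so that ECI can be honoured):

#include <stdint.h>
#include <stdio.h>

/* Net effect of VLD4n.8 for patterns 0..3: de-interleave 64 bytes into 4 Qregs */
static void ref_vld4_b(uint8_t q[4][16], const uint8_t *mem)
{
    for (int e = 0; e < 16; e++) {
        for (int r = 0; r < 4; r++) {
            q[r][e] = mem[4 * e + r];   /* element e of Qd+r is byte 4*e + r of the block */
        }
    }
}

int main(void)
{
    uint8_t mem[64], q[4][16];

    for (int i = 0; i < 64; i++) {
        mem[i] = i;
    }
    ref_vld4_b(q, mem);
    for (int r = 0; r < 4; r++) {
        /* Q0 holds 0,4,8,...,60; Q1 holds 1,5,9,...,61; and so on */
        printf("Q%d: %d %d ... %d\n", r, q[r][0], q[r][1], q[r][15]);
    }
    return 0;
}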
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper-mve.h | 48 ++++++ target/arm/mve.decode | 11 ++ target/arm/mve_helper.c | 342 +++++++++++++++++++++++++++++++++++++ target/arm/translate-mve.c | 94 ++++++++++ 4 files changed, 495 insertions(+) diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h index a85a7e1b75..3db9b15f12 100644 --- a/target/arm/helper-mve.h +++ b/target/arm/helper-mve.h @@ -70,6 +70,54 @@ DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32) +DEF_HELPER_FLAGS_3(mve_vld20b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld20h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld20w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vld21b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld21h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld21w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vld40b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld40h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld40w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vld41b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld41h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld41w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vld42b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld42h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld42w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vld43b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld43h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vld43w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vst20b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst20h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst20w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vst21b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst21h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst21w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vst40b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst40h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst40w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vst41b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst41h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst41w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vst42b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst42h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst42w, TCG_CALL_NO_WG, void, env, i32, i32) + +DEF_HELPER_FLAGS_3(mve_vst43b, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst43h, TCG_CALL_NO_WG, void, env, i32, i32) +DEF_HELPER_FLAGS_3(mve_vst43w, TCG_CALL_NO_WG, void, env, i32, i32) + DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32) DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32) diff --git a/target/arm/mve.decode b/target/arm/mve.decode index 48882dd7f3..8744681629 100644 --- a/target/arm/mve.decode +++ b/target/arm/mve.decode @@ -44,6 +44,7 @@ &vabav qn qm rda size &vldst_sg qd qm rn 
size msize os &vldst_sg_imm qd qm a w imm +&vldst_il qd rn size pat w # scatter-gather memory size is in bits 6:4 %sg_msize 6:1 4:1 @@ -59,6 +60,10 @@ @vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \ qd=%qd qm=%qn +# Deinterleaving load/interleaving store +@vldst_il .... .... .. w:1 . rn:4 .... ... size:2 pat:2 ..... &vldst_il \ + qd=%qd + @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn @@ -158,6 +163,12 @@ VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm +# deinterleaving loads/interleaving stores +VLD2 1111 1100 1 .. 1 .... ... 1 111 .. .. 00000 @vldst_il +VLD4 1111 1100 1 .. 1 .... ... 1 111 .. .. 00001 @vldst_il +VST2 1111 1100 1 .. 0 .... ... 1 111 .. .. 00000 @vldst_il +VST4 1111 1100 1 .. 0 .... ... 1 111 .. .. 00001 @vldst_il + # Moves between 2 32-bit vector lanes and 2 general purpose registers VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c index bbbaa53807..c2826eb5f9 100644 --- a/target/arm/mve_helper.c +++ b/target/arm/mve_helper.c @@ -362,6 +362,348 @@ DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true) DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true) DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true) +/* + * Deinterleaving loads/interleaving stores. + * + * For these helpers we are passed the index of the first Qreg + * (VLD2/VST2 will also access Qn+1, VLD4/VST4 access Qn .. Qn+3) + * and the value of the base address register Rn. + * The helpers are specialized for pattern and element size, so + * for instance vld42h is VLD4 with pattern 2, element size MO_16. + * + * These insns are beatwise but not predicated, so we must honour ECI, + * but need not look at mve_element_mask(). + * + * The pseudocode implements these insns with multiple memory accesses + * of the element size, but rules R_VVVG and R_FXDM permit us to make + * one 32-bit memory access per beat. 
+ */ +#define DO_VLD4B(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat, e; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 4; \ + data = cpu_ldl_le_data_ra(env, addr, GETPC()); \ + for (e = 0; e < 4; e++, data >>= 8) { \ + uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \ + qd[H1(off[beat])] = data; \ + } \ + } \ + } + +#define DO_VLD4H(OP, O1, O2) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O1, O2, O2 }; \ + uint32_t addr, data; \ + int y; /* y counts 0 2 0 2 */ \ + uint16_t *qd; \ + for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 8 + (beat & 1) * 4; \ + data = cpu_ldl_le_data_ra(env, addr, GETPC()); \ + qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \ + qd[H2(off[beat])] = data; \ + data >>= 16; \ + qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \ + qd[H2(off[beat])] = data; \ + } \ + } + +#define DO_VLD4W(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + uint32_t *qd; \ + int y; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 4; \ + data = cpu_ldl_le_data_ra(env, addr, GETPC()); \ + y = (beat + (O1 & 2)) & 3; \ + qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \ + qd[H4(off[beat] >> 2)] = data; \ + } \ + } + +DO_VLD4B(vld40b, 0, 1, 10, 11) +DO_VLD4B(vld41b, 2, 3, 12, 13) +DO_VLD4B(vld42b, 4, 5, 14, 15) +DO_VLD4B(vld43b, 6, 7, 8, 9) + +DO_VLD4H(vld40h, 0, 5) +DO_VLD4H(vld41h, 1, 6) +DO_VLD4H(vld42h, 2, 7) +DO_VLD4H(vld43h, 3, 4) + +DO_VLD4W(vld40w, 0, 1, 10, 11) +DO_VLD4W(vld41w, 2, 3, 12, 13) +DO_VLD4W(vld42w, 4, 5, 14, 15) +DO_VLD4W(vld43w, 6, 7, 8, 9) + +#define DO_VLD2B(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat, e; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + uint8_t *qd; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 2; \ + data = cpu_ldl_le_data_ra(env, addr, GETPC()); \ + for (e = 0; e < 4; e++, data >>= 8) { \ + qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \ + qd[H1(off[beat] + (e >> 1))] = data; \ + } \ + } \ + } + +#define DO_VLD2H(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + int e; \ + uint16_t *qd; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 4; \ + data = cpu_ldl_le_data_ra(env, addr, GETPC()); \ + for (e = 0; e < 2; e++, data >>= 16) 
{ \ + qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \ + qd[H2(off[beat])] = data; \ + } \ + } \ + } + +#define DO_VLD2W(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + uint32_t *qd; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat]; \ + data = cpu_ldl_le_data_ra(env, addr, GETPC()); \ + qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \ + qd[H4(off[beat] >> 3)] = data; \ + } \ + } + +DO_VLD2B(vld20b, 0, 2, 12, 14) +DO_VLD2B(vld21b, 4, 6, 8, 10) + +DO_VLD2H(vld20h, 0, 1, 6, 7) +DO_VLD2H(vld21h, 2, 3, 4, 5) + +DO_VLD2W(vld20w, 0, 4, 24, 28) +DO_VLD2W(vld21w, 8, 12, 16, 20) + +#define DO_VST4B(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat, e; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 4; \ + data = 0; \ + for (e = 3; e >= 0; e--) { \ + uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \ + data = (data << 8) | qd[H1(off[beat])]; \ + } \ + cpu_stl_le_data_ra(env, addr, data, GETPC()); \ + } \ + } + +#define DO_VST4H(OP, O1, O2) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O1, O2, O2 }; \ + uint32_t addr, data; \ + int y; /* y counts 0 2 0 2 */ \ + uint16_t *qd; \ + for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 8 + (beat & 1) * 4; \ + qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \ + data = qd[H2(off[beat])]; \ + qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \ + data |= qd[H2(off[beat])] << 16; \ + cpu_stl_le_data_ra(env, addr, data, GETPC()); \ + } \ + } + +#define DO_VST4W(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + uint32_t *qd; \ + int y; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 4; \ + y = (beat + (O1 & 2)) & 3; \ + qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \ + data = qd[H4(off[beat] >> 2)]; \ + cpu_stl_le_data_ra(env, addr, data, GETPC()); \ + } \ + } + +DO_VST4B(vst40b, 0, 1, 10, 11) +DO_VST4B(vst41b, 2, 3, 12, 13) +DO_VST4B(vst42b, 4, 5, 14, 15) +DO_VST4B(vst43b, 6, 7, 8, 9) + +DO_VST4H(vst40h, 0, 5) +DO_VST4H(vst41h, 1, 6) +DO_VST4H(vst42h, 2, 7) +DO_VST4H(vst43h, 3, 4) + +DO_VST4W(vst40w, 0, 1, 10, 11) +DO_VST4W(vst41w, 2, 3, 12, 13) +DO_VST4W(vst42w, 4, 5, 14, 15) +DO_VST4W(vst43w, 6, 7, 8, 9) + +#define DO_VST2B(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat, e; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + uint8_t *qd; \ + for (beat = 0; beat 
< 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 2; \ + data = 0; \ + for (e = 3; e >= 0; e--) { \ + qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \ + data = (data << 8) | qd[H1(off[beat] + (e >> 1))]; \ + } \ + cpu_stl_le_data_ra(env, addr, data, GETPC()); \ + } \ + } + +#define DO_VST2H(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + int e; \ + uint16_t *qd; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat] * 4; \ + data = 0; \ + for (e = 1; e >= 0; e--) { \ + qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \ + data = (data << 16) | qd[H2(off[beat])]; \ + } \ + cpu_stl_le_data_ra(env, addr, data, GETPC()); \ + } \ + } + +#define DO_VST2W(OP, O1, O2, O3, O4) \ + void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \ + uint32_t base) \ + { \ + int beat; \ + uint16_t mask = mve_eci_mask(env); \ + static const uint8_t off[4] = { O1, O2, O3, O4 }; \ + uint32_t addr, data; \ + uint32_t *qd; \ + for (beat = 0; beat < 4; beat++, mask >>= 4) { \ + if ((mask & 1) == 0) { \ + /* ECI says skip this beat */ \ + continue; \ + } \ + addr = base + off[beat]; \ + qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \ + data = qd[H4(off[beat] >> 3)]; \ + cpu_stl_le_data_ra(env, addr, data, GETPC()); \ + } \ + } + +DO_VST2B(vst20b, 0, 2, 12, 14) +DO_VST2B(vst21b, 4, 6, 8, 10) + +DO_VST2H(vst20h, 0, 1, 6, 7) +DO_VST2H(vst21h, 2, 3, 4, 5) + +DO_VST2W(vst20w, 0, 4, 24, 28) +DO_VST2W(vst21w, 8, 12, 16, 20) + /* * The mergemask(D, R, M) macro performs the operation "*D = R" but * storing only the bytes which correspond to 1 bits in M, diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c index d3cb339686..78229c44c6 100644 --- a/target/arm/translate-mve.c +++ b/target/arm/translate-mve.c @@ -35,6 +35,7 @@ static inline int vidup_imm(DisasContext *s, int x) typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32); typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); +typedef void MVEGenLdStIlFn(TCGv_ptr, TCGv_i32, TCGv_i32); typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr); typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32); @@ -378,6 +379,99 @@ static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a) return do_ldst_sg_imm(s, a, fns[a->w], MO_64); } +static bool do_vldst_il(DisasContext *s, arg_vldst_il *a, MVEGenLdStIlFn *fn, + int addrinc) +{ + TCGv_i32 rn; + + if (!dc_isar_feature(aa32_mve, s) || + !mve_check_qreg_bank(s, a->qd) || + !fn || (a->rn == 13 && a->w) || a->rn == 15) { + /* Variously UNPREDICTABLE or UNDEF or related-encoding */ + return false; + } + if (!mve_eci_check(s) || !vfp_access_check(s)) { + return true; + } + + rn = load_reg(s, a->rn); + /* + * We pass the index of Qd, not a pointer, because the helper must + * access multiple Q registers starting at Qd and working up. 
+ */ + fn(cpu_env, tcg_constant_i32(a->qd), rn); + + if (a->w) { + tcg_gen_addi_i32(rn, rn, addrinc); + store_reg(s, a->rn, rn); + } else { + tcg_temp_free_i32(rn); + } + mve_update_and_store_eci(s); + return true; +} + +/* This macro is just to make the arrays more compact in these functions */ +#define F(N) gen_helper_mve_##N + +static bool trans_VLD2(DisasContext *s, arg_vldst_il *a) +{ + static MVEGenLdStIlFn * const fns[4][4] = { + { F(vld20b), F(vld20h), F(vld20w), NULL, }, + { F(vld21b), F(vld21h), F(vld21w), NULL, }, + { NULL, NULL, NULL, NULL }, + { NULL, NULL, NULL, NULL }, + }; + if (a->qd > 6) { + return false; + } + return do_vldst_il(s, a, fns[a->pat][a->size], 32); +} + +static bool trans_VLD4(DisasContext *s, arg_vldst_il *a) +{ + static MVEGenLdStIlFn * const fns[4][4] = { + { F(vld40b), F(vld40h), F(vld40w), NULL, }, + { F(vld41b), F(vld41h), F(vld41w), NULL, }, + { F(vld42b), F(vld42h), F(vld42w), NULL, }, + { F(vld43b), F(vld43h), F(vld43w), NULL, }, + }; + if (a->qd > 4) { + return false; + } + return do_vldst_il(s, a, fns[a->pat][a->size], 64); +} + +static bool trans_VST2(DisasContext *s, arg_vldst_il *a) +{ + static MVEGenLdStIlFn * const fns[4][4] = { + { F(vst20b), F(vst20h), F(vst20w), NULL, }, + { F(vst21b), F(vst21h), F(vst21w), NULL, }, + { NULL, NULL, NULL, NULL }, + { NULL, NULL, NULL, NULL }, + }; + if (a->qd > 6) { + return false; + } + return do_vldst_il(s, a, fns[a->pat][a->size], 32); +} + +static bool trans_VST4(DisasContext *s, arg_vldst_il *a) +{ + static MVEGenLdStIlFn * const fns[4][4] = { + { F(vst40b), F(vst40h), F(vst40w), NULL, }, + { F(vst41b), F(vst41h), F(vst41w), NULL, }, + { F(vst42b), F(vst42h), F(vst42w), NULL, }, + { F(vst43b), F(vst43h), F(vst43w), NULL, }, + }; + if (a->qd > 4) { + return false; + } + return do_vldst_il(s, a, fns[a->pat][a->size], 64); +} + +#undef F + static bool trans_VDUP(DisasContext *s, arg_VDUP *a) { TCGv_ptr qd; From fc7a5038a6b70a3e474e58a5ed9845e4f5eed6dd Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 30 Jul 2021 16:16:35 +0100 Subject: [PATCH 36/44] target/arm: Re-indent sdiv and udiv helpers We're about to make a code change to the sdiv and udiv helper functions, so first fix their indentation and coding style. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson Message-id: 20210730151636.17254-2-peter.maydell@linaro.org --- target/arm/helper.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/target/arm/helper.c b/target/arm/helper.c index 155d8bf239..8e9c2a2cf8 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -9355,17 +9355,20 @@ uint32_t HELPER(uxtb16)(uint32_t x) int32_t HELPER(sdiv)(int32_t num, int32_t den) { - if (den == 0) - return 0; - if (num == INT_MIN && den == -1) - return INT_MIN; + if (den == 0) { + return 0; + } + if (num == INT_MIN && den == -1) { + return INT_MIN; + } return num / den; } uint32_t HELPER(udiv)(uint32_t num, uint32_t den) { - if (den == 0) - return 0; + if (den == 0) { + return 0; + } return num / den; } From e5346292966f5348cd36668f2451ca0e44d820b2 Mon Sep 17 00:00:00 2001 From: Peter Maydell Date: Fri, 30 Jul 2021 16:16:36 +0100 Subject: [PATCH 37/44] target/arm: Implement M-profile trapping on division by zero Unlike A-profile, for M-profile the UDIV and SDIV insns can be configured to raise an exception on division by zero, using the CCR DIV_0_TRP bit. Implement support for setting this bit by making the helper functions raise the appropriate exception. 
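As a point of reference, the architectural behaviour being modelled can be sketched as a standalone C function. This is only an illustrative sketch, not the QEMU helper itself: ccr_div_0_trp and pend_divbyzero_usagefault are placeholder names standing in for "CCR.DIV_0_TRP of the current security state" and "pend a UsageFault with CFSR.DIVBYZERO set".

    #include <stdint.h>
    #include <limits.h>
    #include <stdbool.h>

    extern bool ccr_div_0_trp;                    /* placeholder: CCR.DIV_0_TRP state */
    extern void pend_divbyzero_usagefault(void);  /* placeholder: take the UsageFault */

    static int32_t m_profile_sdiv(int32_t num, int32_t den)
    {
        if (den == 0) {
            if (ccr_div_0_trp) {
                /* Trapping enabled: the fault is taken instead of producing 0 */
                pend_divbyzero_usagefault();
            }
            return 0;             /* untrapped division by zero yields 0 */
        }
        if (num == INT_MIN && den == -1) {
            return INT_MIN;       /* overflow case; never trapped */
        }
        return num / den;
    }
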
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson Message-id: 20210730151636.17254-3-peter.maydell@linaro.org --- target/arm/cpu.h | 1 + target/arm/helper.c | 19 +++++++++++++++++-- target/arm/helper.h | 4 ++-- target/arm/m_helper.c | 4 ++++ target/arm/translate.c | 4 ++-- 5 files changed, 26 insertions(+), 6 deletions(-) diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 9f0a5f84d5..5cf8996ae3 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -54,6 +54,7 @@ #define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */ #define EXCP_LSERR 21 /* v8M LSERR SecureFault */ #define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */ +#define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */ /* NB: add new EXCP_ defines to the array in arm_log_exception() too */ #define ARMV7M_EXCP_RESET 1 diff --git a/target/arm/helper.c b/target/arm/helper.c index 8e9c2a2cf8..56c520cf8e 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -9345,6 +9345,18 @@ uint32_t HELPER(sxtb16)(uint32_t x) return res; } +static void handle_possible_div0_trap(CPUARMState *env, uintptr_t ra) +{ + /* + * Take a division-by-zero exception if necessary; otherwise return + * to get the usual non-trapping division behaviour (result of 0) + */ + if (arm_feature(env, ARM_FEATURE_M) + && (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) { + raise_exception_ra(env, EXCP_DIVBYZERO, 0, 1, ra); + } +} + uint32_t HELPER(uxtb16)(uint32_t x) { uint32_t res; @@ -9353,9 +9365,10 @@ uint32_t HELPER(uxtb16)(uint32_t x) return res; } -int32_t HELPER(sdiv)(int32_t num, int32_t den) +int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den) { if (den == 0) { + handle_possible_div0_trap(env, GETPC()); return 0; } if (num == INT_MIN && den == -1) { @@ -9364,9 +9377,10 @@ int32_t HELPER(sdiv)(int32_t num, int32_t den) return num / den; } -uint32_t HELPER(udiv)(uint32_t num, uint32_t den) +uint32_t HELPER(udiv)(CPUARMState *env, uint32_t num, uint32_t den) { if (den == 0) { + handle_possible_div0_trap(env, GETPC()); return 0; } return num / den; @@ -9567,6 +9581,7 @@ void arm_log_exception(int idx) [EXCP_LAZYFP] = "v7M exception during lazy FP stacking", [EXCP_LSERR] = "v8M LSERR UsageFault", [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault", + [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault", }; if (idx >= 0 && idx < ARRAY_SIZE(excnames)) { diff --git a/target/arm/helper.h b/target/arm/helper.h index 248569b0cd..aee8f0019b 100644 --- a/target/arm/helper.h +++ b/target/arm/helper.h @@ -6,8 +6,8 @@ DEF_HELPER_3(add_saturate, i32, env, i32, i32) DEF_HELPER_3(sub_saturate, i32, env, i32, i32) DEF_HELPER_3(add_usaturate, i32, env, i32, i32) DEF_HELPER_3(sub_usaturate, i32, env, i32, i32) -DEF_HELPER_FLAGS_2(sdiv, TCG_CALL_NO_RWG_SE, s32, s32, s32) -DEF_HELPER_FLAGS_2(udiv, TCG_CALL_NO_RWG_SE, i32, i32, i32) +DEF_HELPER_FLAGS_3(sdiv, TCG_CALL_NO_RWG, s32, env, s32, s32) +DEF_HELPER_FLAGS_3(udiv, TCG_CALL_NO_RWG, i32, env, i32, i32) DEF_HELPER_FLAGS_1(rbit, TCG_CALL_NO_RWG_SE, i32, i32) #define PAS_OP(pfx) \ diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c index 20761c9487..47903b3dc3 100644 --- a/target/arm/m_helper.c +++ b/target/arm/m_helper.c @@ -2252,6 +2252,10 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK; break; + case EXCP_DIVBYZERO: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_DIVBYZERO_MASK; + 
break; case EXCP_SWI: /* The PC already points to the next instruction. */ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure); diff --git a/target/arm/translate.c b/target/arm/translate.c index 804a53279b..115aa768b6 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -7992,9 +7992,9 @@ static bool op_div(DisasContext *s, arg_rrr *a, bool u) t1 = load_reg(s, a->rn); t2 = load_reg(s, a->rm); if (u) { - gen_helper_udiv(t1, t1, t2); + gen_helper_udiv(t1, cpu_env, t1, t2); } else { - gen_helper_sdiv(t1, t1, t2); + gen_helper_sdiv(t1, cpu_env, t1, t2); } tcg_temp_free_i32(t2); store_reg(s, a->rd, t1); From dfa0d9b80ed36c3e3a92346c35e7e7b1e4afc49d Mon Sep 17 00:00:00 2001 From: Hamza Mahfooz Date: Tue, 27 Jul 2021 19:52:01 -0400 Subject: [PATCH 38/44] target/arm: kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route() As per commit 5626f8c6d468 ("rcu: Add automatically released rcu_read_lock variants"), RCU_READ_LOCK_GUARD() should be used instead of rcu_read_{un}lock(). Signed-off-by: Hamza Mahfooz Reviewed-by: Paolo Bonzini Message-id: 20210727235201.11491-1-someguy@effective-light.com Signed-off-by: Peter Maydell --- target/arm/kvm.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/target/arm/kvm.c b/target/arm/kvm.c index d8381ba224..5d55de1a49 100644 --- a/target/arm/kvm.c +++ b/target/arm/kvm.c @@ -998,7 +998,6 @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route, hwaddr xlat, len, doorbell_gpa; MemoryRegionSection mrs; MemoryRegion *mr; - int ret = 1; if (as == &address_space_memory) { return 0; @@ -1006,15 +1005,19 @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route, /* MSI doorbell address is translated by an IOMMU */ - rcu_read_lock(); + RCU_READ_LOCK_GUARD(); + mr = address_space_translate(as, address, &xlat, &len, true, MEMTXATTRS_UNSPECIFIED); + if (!mr) { - goto unlock; + return 1; } + mrs = memory_region_find(mr, xlat, 1); + if (!mrs.mr) { - goto unlock; + return 1; } doorbell_gpa = mrs.offset_within_address_space; @@ -1025,11 +1028,7 @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route, trace_kvm_arm_fixup_msi_route(address, doorbell_gpa); - ret = 0; - -unlock: - rcu_read_unlock(); - return ret; + return 0; } int kvm_arch_add_msi_route_post(struct kvm_irq_routing_entry *route, From d60af909d5131300596d4f6f052b2d0f44e76560 Mon Sep 17 00:00:00 2001 From: Jan Luebbe Date: Fri, 6 Aug 2021 16:47:00 +0200 Subject: [PATCH 39/44] hw/char/pl011: add support for sending break Break events are currently only handled by chardev/char-serial.c, so we just ignore errors, which results in no behaviour change for other chardevs. 
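For context, a guest requests a break condition by setting bit 0 (BRK) of the UARTLCR_H register at offset 0x02C, and clears it again to end the break; the sketch below shows what that looks like from bare-metal guest code. The base address is an assumption of this example only (it is board specific), not something this patch defines.

    #include <stdint.h>

    #define PL011_BASE  0x09000000u   /* assumed base address; board specific */
    #define UARTLCR_H   (*(volatile uint32_t *)(PL011_BASE + 0x02C))
    #define LCR_H_BRK   (1u << 0)     /* break is sent for as long as this bit is set */

    static void pl011_set_break(int on)
    {
        if (on) {
            UARTLCR_H |= LCR_H_BRK;   /* start sending the break condition */
        } else {
            UARTLCR_H &= ~LCR_H_BRK;  /* stop sending the break condition */
        }
    }
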
Signed-off-by: Jan Luebbe Message-id: 20210806144700.3751979-1-jlu@pengutronix.de Reviewed-by: Peter Maydell Signed-off-by: Peter Maydell --- hw/char/pl011.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/hw/char/pl011.c b/hw/char/pl011.c index dc85527a5f..6e2d7f7509 100644 --- a/hw/char/pl011.c +++ b/hw/char/pl011.c @@ -26,6 +26,7 @@ #include "hw/qdev-properties-system.h" #include "migration/vmstate.h" #include "chardev/char-fe.h" +#include "chardev/char-serial.h" #include "qemu/log.h" #include "qemu/module.h" #include "trace.h" @@ -231,6 +232,11 @@ static void pl011_write(void *opaque, hwaddr offset, s->read_count = 0; s->read_pos = 0; } + if ((s->lcr ^ value) & 0x1) { + int break_enable = value & 0x1; + qemu_chr_fe_ioctl(&s->chr, CHR_IOCTL_SERIAL_SET_BREAK, + &break_enable); + } s->lcr = value; pl011_set_read_trigger(s); break; From ff31cca71ef257f8da2d9bc647ddd53f080ce580 Mon Sep 17 00:00:00 2001 From: Guenter Roeck Date: Tue, 10 Aug 2021 09:03:18 -0700 Subject: [PATCH 40/44] fsl-imx6ul: Instantiate SAI1/2/3 and ASRC as unimplemented devices MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Instantiate SAI1/2/3 and ASRC as unimplemented devices to avoid random Linux kernel crashes, such as Unhandled fault: external abort on non-linefetch (0x808) at 0xd1580010 pgd = (ptrval) [d1580010] *pgd=8231b811, *pte=02034653, *ppte=02034453 Internal error: : 808 [#1] SMP ARM ... [] (regmap_mmio_write32le) from [] (regmap_mmio_write+0x3c/0x54) [] (regmap_mmio_write) from [] (_regmap_write+0x4c/0x1f0) [] (_regmap_write) from [] (_regmap_update_bits+0xe4/0xec) [] (_regmap_update_bits) from [] (regmap_update_bits_base+0x50/0x74) [] (regmap_update_bits_base) from [] (fsl_asrc_runtime_resume+0x1e4/0x21c) [] (fsl_asrc_runtime_resume) from [] (__rpm_callback+0x3c/0x108) [] (__rpm_callback) from [] (rpm_callback+0x60/0x64) [] (rpm_callback) from [] (rpm_resume+0x5cc/0x808) [] (rpm_resume) from [] (__pm_runtime_resume+0x60/0xa0) [] (__pm_runtime_resume) from [] (fsl_asrc_probe+0x2a8/0x708) [] (fsl_asrc_probe) from [] (platform_probe+0x58/0xb8) [] (platform_probe) from [] (really_probe.part.0+0x9c/0x334) [] (really_probe.part.0) from [] (__driver_probe_device+0xa0/0x138) [] (__driver_probe_device) from [] (driver_probe_device+0x30/0xc8) [] (driver_probe_device) from [] (__driver_attach+0x90/0x130) [] (__driver_attach) from [] (bus_for_each_dev+0x78/0xb8) [] (bus_for_each_dev) from [] (bus_add_driver+0xf0/0x1d8) [] (bus_add_driver) from [] (driver_register+0x88/0x118) [] (driver_register) from [] (do_one_initcall+0x7c/0x3a4) [] (do_one_initcall) from [] (kernel_init_freeable+0x198/0x22c) [] (kernel_init_freeable) from [] (kernel_init+0x10/0x128) [] (kernel_init) from [] (ret_from_fork+0x14/0x38) or Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000 pgd = (ptrval) [d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453 Internal error: : 808 [#1] SMP ARM ... 
[] (regmap_mmio_write32le) from [] (regmap_mmio_write+0x3c/0x54) [] (regmap_mmio_write) from [] (_regmap_write+0x4c/0x1f0) [] (_regmap_write) from [] (regmap_write+0x3c/0x60) [] (regmap_write) from [] (fsl_sai_runtime_resume+0x9c/0x1ec) [] (fsl_sai_runtime_resume) from [] (__rpm_callback+0x3c/0x108) [] (__rpm_callback) from [] (rpm_callback+0x60/0x64) [] (rpm_callback) from [] (rpm_resume+0x5cc/0x808) [] (rpm_resume) from [] (__pm_runtime_resume+0x60/0xa0) [] (__pm_runtime_resume) from [] (fsl_sai_probe+0x2b8/0x65c) [] (fsl_sai_probe) from [] (platform_probe+0x58/0xb8) [] (platform_probe) from [] (really_probe.part.0+0x9c/0x334) [] (really_probe.part.0) from [] (__driver_probe_device+0xa0/0x138) [] (__driver_probe_device) from [] (driver_probe_device+0x30/0xc8) [] (driver_probe_device) from [] (__driver_attach+0x90/0x130) [] (__driver_attach) from [] (bus_for_each_dev+0x78/0xb8) [] (bus_for_each_dev) from [] (bus_add_driver+0xf0/0x1d8) [] (bus_add_driver) from [] (driver_register+0x88/0x118) [] (driver_register) from [] (do_one_initcall+0x7c/0x3a4) [] (do_one_initcall) from [] (kernel_init_freeable+0x198/0x22c) [] (kernel_init_freeable) from [] (kernel_init+0x10/0x128) [] (kernel_init) from [] (ret_from_fork+0x14/0x38) Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Guenter Roeck Message-id: 20210810160318.87376-1-linux@roeck-us.net Signed-off-by: Peter Maydell --- hw/arm/fsl-imx6ul.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c index e0128d7316..1d1a708dd9 100644 --- a/hw/arm/fsl-imx6ul.c +++ b/hw/arm/fsl-imx6ul.c @@ -534,6 +534,13 @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp) */ create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR, 0x4000); + /* + * SAI (Audio SSI (Synchronous Serial Interface)) + */ + create_unimplemented_device("sai1", FSL_IMX6UL_SAI1_ADDR, 0x4000); + create_unimplemented_device("sai2", FSL_IMX6UL_SAI2_ADDR, 0x4000); + create_unimplemented_device("sai3", FSL_IMX6UL_SAI3_ADDR, 0x4000); + /* * PWM */ @@ -542,6 +549,11 @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp) create_unimplemented_device("pwm3", FSL_IMX6UL_PWM3_ADDR, 0x4000); create_unimplemented_device("pwm4", FSL_IMX6UL_PWM4_ADDR, 0x4000); + /* + * Audio ASRC (asynchronous sample rate converter) + */ + create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR, 0x4000); + /* * CAN */ From 77844cc51aa0714d54ae6f5a12279ce0e7f5ef55 Mon Sep 17 00:00:00 2001 From: "Wen, Jianxian" Date: Wed, 18 Aug 2021 10:17:00 +0000 Subject: [PATCH 41/44] hw/dma/pl330: Add memory region to replace default MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add property memory region which can connect with IOMMU region to support SMMU translate. 
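Board code is expected to set the new link property before realizing the device, roughly as in the sketch below. Here 'dma_mr' is a placeholder for whatever MemoryRegion the board wants DMA accesses to go through (for example an IOMMU downstream region rather than plain system memory), and the other property values are only examples.

    /* Sketch of board-side wiring; 'dma_mr' is a placeholder MemoryRegion. */
    DeviceState *dev = qdev_new("pl330");
    object_property_set_link(OBJECT(dev), "memory", OBJECT(dma_mr), &error_fatal);
    qdev_prop_set_uint8(dev, "num_chnls", 8);
    qdev_prop_set_uint8(dev, "num_periph_req", 4);
    qdev_prop_set_uint8(dev, "num_events", 16);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
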
Signed-off-by: Jianxian Wen Reviewed-by: Philippe Mathieu-Daudé Message-id: 4C23C17B8E87E74E906A25A3254A03F4FA1FEC31@SHASXM03.verisilicon.com Signed-off-by: Peter Maydell --- hw/arm/exynos4210.c | 3 +++ hw/arm/xilinx_zynq.c | 3 +++ hw/dma/pl330.c | 26 ++++++++++++++++++++++---- 3 files changed, 28 insertions(+), 4 deletions(-) diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c index 5c7a51bbad..0299e81f85 100644 --- a/hw/arm/exynos4210.c +++ b/hw/arm/exynos4210.c @@ -173,6 +173,9 @@ static DeviceState *pl330_create(uint32_t base, qemu_or_irq *orgate, int i; dev = qdev_new("pl330"); + object_property_set_link(OBJECT(dev), "memory", + OBJECT(get_system_memory()), + &error_fatal); qdev_prop_set_uint8(dev, "num_events", nevents); qdev_prop_set_uint8(dev, "num_chnls", 8); qdev_prop_set_uint8(dev, "num_periph_req", nreq); diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c index 245af81bbb..69c333e91b 100644 --- a/hw/arm/xilinx_zynq.c +++ b/hw/arm/xilinx_zynq.c @@ -312,6 +312,9 @@ static void zynq_init(MachineState *machine) sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[39-IRQ_OFFSET]); dev = qdev_new("pl330"); + object_property_set_link(OBJECT(dev), "memory", + OBJECT(address_space_mem), + &error_fatal); qdev_prop_set_uint8(dev, "num_chnls", 8); qdev_prop_set_uint8(dev, "num_periph_req", 4); qdev_prop_set_uint8(dev, "num_events", 16); diff --git a/hw/dma/pl330.c b/hw/dma/pl330.c index 944ba296b0..0cb46191c1 100644 --- a/hw/dma/pl330.c +++ b/hw/dma/pl330.c @@ -269,6 +269,9 @@ struct PL330State { uint8_t num_faulting; uint8_t periph_busy[PL330_PERIPH_NUM]; + /* Memory region that DMA operation access */ + MemoryRegion *mem_mr; + AddressSpace *mem_as; }; #define TYPE_PL330 "pl330" @@ -1108,7 +1111,7 @@ static inline const PL330InsnDesc *pl330_fetch_insn(PL330Chan *ch) uint8_t opcode; int i; - dma_memory_read(&address_space_memory, ch->pc, &opcode, 1); + dma_memory_read(ch->parent->mem_as, ch->pc, &opcode, 1); for (i = 0; insn_desc[i].size; i++) { if ((opcode & insn_desc[i].opmask) == insn_desc[i].opcode) { return &insn_desc[i]; @@ -1122,7 +1125,7 @@ static inline void pl330_exec_insn(PL330Chan *ch, const PL330InsnDesc *insn) uint8_t buf[PL330_INSN_MAXSIZE]; assert(insn->size <= PL330_INSN_MAXSIZE); - dma_memory_read(&address_space_memory, ch->pc, buf, insn->size); + dma_memory_read(ch->parent->mem_as, ch->pc, buf, insn->size); insn->exec(ch, buf[0], &buf[1], insn->size - 1); } @@ -1186,7 +1189,7 @@ static int pl330_exec_cycle(PL330Chan *channel) if (q != NULL && q->len <= pl330_fifo_num_free(&s->fifo)) { int len = q->len - (q->addr & (q->len - 1)); - dma_memory_read(&address_space_memory, q->addr, buf, len); + dma_memory_read(s->mem_as, q->addr, buf, len); trace_pl330_exec_cycle(q->addr, len); if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) { pl330_hexdump(buf, len); @@ -1217,7 +1220,7 @@ static int pl330_exec_cycle(PL330Chan *channel) fifo_res = pl330_fifo_get(&s->fifo, buf, len, q->tag); } if (fifo_res == PL330_FIFO_OK || q->z) { - dma_memory_write(&address_space_memory, q->addr, buf, len); + dma_memory_write(s->mem_as, q->addr, buf, len); trace_pl330_exec_cycle(q->addr, len); if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) { pl330_hexdump(buf, len); @@ -1562,6 +1565,18 @@ static void pl330_realize(DeviceState *dev, Error **errp) "dma", PL330_IOMEM_SIZE); sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem); + if (!s->mem_mr) { + error_setg(errp, "'memory' link is not set"); + return; + } else if (s->mem_mr == get_system_memory()) { + /* Avoid creating new AS for system 
memory. */ + s->mem_as = &address_space_memory; + } else { + s->mem_as = g_new0(AddressSpace, 1); + address_space_init(s->mem_as, s->mem_mr, + memory_region_name(s->mem_mr)); + } + s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, pl330_exec_cycle_timer, s); s->cfg[0] = (s->mgr_ns_at_rst ? 0x4 : 0) | @@ -1656,6 +1671,9 @@ static Property pl330_properties[] = { DEFINE_PROP_UINT8("rd_q_dep", PL330State, rd_q_dep, 16), DEFINE_PROP_UINT16("data_buffer_dep", PL330State, data_buffer_dep, 256), + DEFINE_PROP_LINK("memory", PL330State, mem_mr, + TYPE_MEMORY_REGION, MemoryRegion *), + DEFINE_PROP_END_OF_LIST(), }; From 80d60a6d1efebcf35ff96e2d0a51373b0383bc10 Mon Sep 17 00:00:00 2001 From: Eduardo Habkost Date: Thu, 5 Aug 2021 22:31:19 -0400 Subject: [PATCH 42/44] sbsa-ref: Rename SBSA_GWDT enum value The SBSA_GWDT enum value conflicts with the SBSA_GWDT() QOM type checking helper, preventing us from using a OBJECT_DEFINE* or DEFINE_INSTANCE_CHECKER macro for the SBSA_GWDT() wrapper. If I understand the SBSA 6.0 specification correctly, the signal being connected to IRQ 16 is the WS0 output signal from the Generic Watchdog. Rename the enum value to SBSA_GWDT_WS0 to be more explicit and avoid the name conflict. Signed-off-by: Eduardo Habkost Message-id: 20210806023119.431680-1-ehabkost@redhat.com Reviewed-by: Peter Maydell Signed-off-by: Peter Maydell --- hw/arm/sbsa-ref.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c index c1629df603..509c5f09b4 100644 --- a/hw/arm/sbsa-ref.c +++ b/hw/arm/sbsa-ref.c @@ -65,7 +65,7 @@ enum { SBSA_GIC_DIST, SBSA_GIC_REDIST, SBSA_SECURE_EC, - SBSA_GWDT, + SBSA_GWDT_WS0, SBSA_GWDT_REFRESH, SBSA_GWDT_CONTROL, SBSA_SMMU, @@ -140,7 +140,7 @@ static const int sbsa_ref_irqmap[] = { [SBSA_AHCI] = 10, [SBSA_EHCI] = 11, [SBSA_SMMU] = 12, /* ... to 15 */ - [SBSA_GWDT] = 16, + [SBSA_GWDT_WS0] = 16, }; static const char * const valid_cpus[] = { @@ -481,7 +481,7 @@ static void create_wdt(const SBSAMachineState *sms) hwaddr cbase = sbsa_ref_memmap[SBSA_GWDT_CONTROL].base; DeviceState *dev = qdev_new(TYPE_WDT_SBSA); SysBusDevice *s = SYS_BUS_DEVICE(dev); - int irq = sbsa_ref_irqmap[SBSA_GWDT]; + int irq = sbsa_ref_irqmap[SBSA_GWDT_WS0]; sysbus_realize_and_unref(s, &error_fatal); sysbus_mmio_map(s, 0, rbase); From 6f287c700c5fad924585dee1308477aa9e73ae50 Mon Sep 17 00:00:00 2001 From: Guenter Roeck Date: Tue, 10 Aug 2021 10:56:07 -0700 Subject: [PATCH 43/44] fsl-imx7: Instantiate SAI1/2/3 as unimplemented devices Instantiate SAI1/2/3 as unimplemented devices to avoid Linux kernel crashes such as the following. Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000 pgd = (ptrval) [d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453 Internal error: : 808 [#1] SMP ARM Modules linked in: CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-rc5 #1 ... 
[] (regmap_mmio_write32le) from [] (regmap_mmio_write+0x3c/0x54) [] (regmap_mmio_write) from [] (_regmap_write+0x4c/0x1f0) [] (_regmap_write) from [] (regmap_write+0x3c/0x60) [] (regmap_write) from [] (fsl_sai_runtime_resume+0x9c/0x1ec) [] (fsl_sai_runtime_resume) from [] (__rpm_callback+0x3c/0x108) [] (__rpm_callback) from [] (rpm_callback+0x60/0x64) [] (rpm_callback) from [] (rpm_resume+0x5cc/0x808) [] (rpm_resume) from [] (__pm_runtime_resume+0x60/0xa0) [] (__pm_runtime_resume) from [] (fsl_sai_probe+0x2b8/0x65c) [] (fsl_sai_probe) from [] (platform_probe+0x58/0xb8) [] (platform_probe) from [] (really_probe.part.0+0x9c/0x334) [] (really_probe.part.0) from [] (__driver_probe_device+0xa0/0x138) [] (__driver_probe_device) from [] (driver_probe_device+0x30/0xc8) [] (driver_probe_device) from [] (__driver_attach+0x90/0x130) [] (__driver_attach) from [] (bus_for_each_dev+0x78/0xb8) [] (bus_for_each_dev) from [] (bus_add_driver+0xf0/0x1d8) [] (bus_add_driver) from [] (driver_register+0x88/0x118) [] (driver_register) from [] (do_one_initcall+0x7c/0x3a4) [] (do_one_initcall) from [] (kernel_init_freeable+0x198/0x22c) [] (kernel_init_freeable) from [] (kernel_init+0x10/0x128) [] (kernel_init) from [] (ret_from_fork+0x14/0x38) Signed-off-by: Guenter Roeck Message-id: 20210810175607.538090-1-linux@roeck-us.net Reviewed-by: Peter Maydell Signed-off-by: Peter Maydell --- hw/arm/fsl-imx7.c | 7 +++++++ include/hw/arm/fsl-imx7.h | 5 +++++ 2 files changed, 12 insertions(+) diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c index 2ff2cab924..149885f2b8 100644 --- a/hw/arm/fsl-imx7.c +++ b/hw/arm/fsl-imx7.c @@ -467,6 +467,13 @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp) create_unimplemented_device("can1", FSL_IMX7_CAN1_ADDR, FSL_IMX7_CANn_SIZE); create_unimplemented_device("can2", FSL_IMX7_CAN2_ADDR, FSL_IMX7_CANn_SIZE); + /* + * SAI (Audio SSI (Synchronous Serial Interface)) + */ + create_unimplemented_device("sai1", FSL_IMX7_SAI1_ADDR, FSL_IMX7_SAIn_SIZE); + create_unimplemented_device("sai2", FSL_IMX7_SAI2_ADDR, FSL_IMX7_SAIn_SIZE); + create_unimplemented_device("sai2", FSL_IMX7_SAI3_ADDR, FSL_IMX7_SAIn_SIZE); + /* * OCOTP */ diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h index f5d527a490..1c5fa6fd67 100644 --- a/include/hw/arm/fsl-imx7.h +++ b/include/hw/arm/fsl-imx7.h @@ -174,6 +174,11 @@ enum FslIMX7MemoryMap { FSL_IMX7_UART6_ADDR = 0x30A80000, FSL_IMX7_UART7_ADDR = 0x30A90000, + FSL_IMX7_SAI1_ADDR = 0x308A0000, + FSL_IMX7_SAI2_ADDR = 0x308B0000, + FSL_IMX7_SAI3_ADDR = 0x308C0000, + FSL_IMX7_SAIn_SIZE = 0x10000, + FSL_IMX7_ENET1_ADDR = 0x30BE0000, FSL_IMX7_ENET2_ADDR = 0x30BF0000, From 24b1a6aa43615be22c7ee66bd68ec5675f6a6a9a Mon Sep 17 00:00:00 2001 From: Sebastian Meyer Date: Tue, 10 Aug 2021 18:04:36 +0200 Subject: [PATCH 44/44] docs: Document how to use gdb with unix sockets MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit With gdb 9.0 and better it is possible to connect to a gdbstub over unix sockets, which is better than a TCP socket connection in some situations. The QEMU command line to set this up is non-obvious; document it. 
Signed-off-by: Sebastian Meyer Message-id: 162867284829.27377.4784930719350564918-0@git.sr.ht [PMM: Tweaked commit message; adjusted wording in a couple of places; fixed rST formatting issue; moved section up out of the 'advanced debugging options' subsection] Reviewed-by: Peter Maydell Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Peter Maydell --- docs/system/gdb.rst | 26 +++++++++++++++++++++++++- 1 file changed, 25 insertions(+), 1 deletion(-) diff --git a/docs/system/gdb.rst b/docs/system/gdb.rst index 144d083df3..bdb42dae2f 100644 --- a/docs/system/gdb.rst +++ b/docs/system/gdb.rst @@ -15,7 +15,8 @@ The ``-s`` option will make QEMU listen for an incoming connection from gdb on TCP port 1234, and ``-S`` will make QEMU not start the guest until you tell it to from gdb. (If you want to specify which TCP port to use or to use something other than TCP for the gdbstub -connection, use the ``-gdb dev`` option instead of ``-s``.) +connection, use the ``-gdb dev`` option instead of ``-s``. See +`Using unix sockets`_ for an example.) .. parsed-literal:: @@ -100,6 +101,29 @@ not just those in the cluster you are currently working on:: (gdb) set schedule-multiple on +Using unix sockets +================== + +An alternate method for connecting gdb to the QEMU gdbstub is to use +a unix socket (if supported by your operating system). This is useful when +running several tests in parallel, or if you do not have a known free TCP +port (e.g. when running automated tests). + +First create a chardev with the appropriate options, then +instruct the gdbserver to use that device: + +.. parsed-literal:: + + |qemu_system| -chardev socket,path=/tmp/gdb-socket,server=on,wait=off,id=gdb0 -gdb chardev:gdb0 -S ... + +Start gdb as before, but this time connect using the path to +the socket:: + + (gdb) target remote /tmp/gdb-socket + +Note that to use a unix socket for the connection you will need +gdb version 9.0 or newer. + Advanced debugging options ==========================