This changes code that previously looked at the primary opcode (bits
26 to 31) of the instruction to use other methods, in places other
than stage 0 of decode1:
* Extend rc_t to have a new value, RCOE, indicating that the
instruction has both Rc and OE bits.
* Decode2 now tells execute1 whether the instruction has a third
operand, used for distinguishing between multiply and multiply-add
instructions.
* The invert_a field of the decode ROM is overloaded for load/store
instructions to indicate cache-inhibited loads and stores.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This lets us compute r_out.reg_*_addr and r_out.read_2_enable values
without needing access to the primary opcode value. We also make use
of the fact that non-FP instructions have codes less than 256.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This reduces the block RAM requirements for instruction decoding by
splitting it into two steps. The first, in a new pipeline stage
called decode0 (implemented by code in decode1.vhdl), maps the
instruction to a 9-bit instruction code using major and row decode
ROMs. The second maps the 9-bit code to the final decode_rom_t (about
44 bits wide). Branch prediction done in decode is now done in
decode0 rather than decode1.
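Roughly, the pipelining works as below; the signal names (icode,
use_row, decode_rom and friends) are illustrative rather than the
actual ones.

    -- decode0 registers a 9-bit instruction code chosen from the
    -- major/row ROM outputs; decode1 uses last cycle's code to index
    -- the ROM of ~44-bit decode_rom_t entries (ieee.numeric_std assumed).
    process (clk)
    begin
        if rising_edge(clk) then
            if use_row = '1' then
                icode <= row_icode;
            else
                icode <= major_icode;
            end if;
            decoded <= decode_rom(to_integer(unsigned(icode)));
        end if;
    end process;
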
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This combines the various decode arrays in decode1 into two, one
indexed by the major opcode (bits 31--26 of the instruction) together
with bits 4--0 of the instruction, and the other indexed mostly by the
minor opcode (bits 10--1), with some swizzles to accommodate the
relevant parts of the minor opcode space for opcodes 19, 31, 59 and 63
within a 2k entry ROM (11 address bits). These are called the "major"
and the "row" decode ROMs respectively. (Bits 10--6 of the
instruction are called the "row index", and bits 5--1, or 5--0 for
some opcodes, are called the "column index", because of the way the
opcode maps in the ISA are laid out.)
Both ROMs are looked up each cycle, and the result from one or the
other, or from an override in ri.override_decode, is selected after a
clock edge.
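In outline the lookups and the selection are along these lines; the
names are illustrative and the row-address swizzling for opcodes 19,
31, 59 and 63 is not spelled out.

    -- 11-bit ROM addresses (ieee.numeric_std assumed).
    major_addr <= insn(31 downto 26) & insn(4 downto 0);
    -- Mostly insn(10 downto 1), swizzled per major opcode (not shown).
    row_addr   <= row_index_swizzle(insn);

    process (clk)
    begin
        if rising_edge(clk) then
            major_data <= major_rom(to_integer(unsigned(major_addr)));
            row_data   <= row_rom(to_integer(unsigned(row_addr)));
        end if;
    end process;

    -- After the clock edge, pick one of the two results, or an override.
    decoded <= ri.override_decode when use_override = '1' else
               row_data           when use_row_rom = '1' else
               major_data;
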
This uses quite a lot of BRAM resources. In future a predecode step
will reduce the BRAM usage substantially.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Instead of doing that in decode1. That lets us get rid of the
force_single and override_unit fields of reg_internal_t in decode1,
which will simplify subsequent changes to decode1.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
When a floating-point subtraction produces a zero result, the sign
of the result is required to be positive in all rounding modes except
round-to-minus-infinity mode, in which it is negative. Consolidate
the logic for doing this in one place, in the pack_dp function,
instead of having it at each place where a zero result is generated.
Since fnmadd[s] and fnmsub[s] negate the result after this rule has
been applied, we use the r.negate signal to indicate a negation which
is now done in pack_dp. Thus the EXC_RESULT state no longer uses
r.negate, and in fact doesn't set v.result_sign at all; that is now
done in the states that lead into EXC_RESULT.
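Inside pack_dp the handling becomes something like this (variable and
constant names are illustrative):

    -- Zero results are +0 except when rounding towards minus infinity;
    -- the negation for fnmadd[s]/fnmsub[s] is applied afterwards.
    if is_zero = '1' then
        if round_mode = ROUND_MINUS_INF then
            sign := '1';
        else
            sign := '0';
        end if;
    end if;
    if negate = '1' then        -- r.negate
        sign := not sign;
    end if;
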
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Do more decoding of the instruction ahead of the IDLE state
processing so that the IDLE state code becomes much simpler.
To make the decoding easier, we now use four insn_type_t codes for
floating-point operations rather than two. This also rearranges the
insn_type_t values a little to get the 4 FP opcode values to differ
only in the bottom 2 bits, and to put OP_DIV, OP_DIVE and OP_MOD next
to them.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
With this, the large case statement sets values for a set of control
signals, which then control multiplexers and adders that generate
values for v.result_exp and v.shift. The plan is for the case
statement to turn into a microcode ROM eventually.
The value of v.result_exp is the sum of two values, either of which
can be negated (but not both). The first value can be chosen from the
result exponent, A exponent, B exponent arithmetically shifted right
one bit, or 0. The second value can be chosen from new_exp (which is
r.result_exp - r.shift), B exponent, C exponent or a constant. The
choices for the constant are 0, 56, the maximum exponent (max_exp) or
the exponent bias for trap-enabled overflow conditions (bias_exp).
These choices are controlled by the signals re_sel1, re_neg1, re_sel2
and re_neg2, and the sum is written into v.result_exp if re_set_result
is 1.
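As a sketch, the result-exponent side looks something like the
following; the selector encodings and variable names are illustrative,
and the exponents are signed values.

    -- Operand 1: result exponent, A exponent, B exponent shifted
    -- right arithmetically by one bit, or 0.
    case re_sel1 is
        when RES1_RESULT => rexp1 := r.result_exp;
        when RES1_A      => rexp1 := r.a.exponent;
        when RES1_BHALF  => rexp1 := shift_right(r.b.exponent, 1);
        when others      => rexp1 := to_signed(0, rexp1'length);
    end case;
    -- Operand 2: new_exp, B exponent, C exponent, or a constant
    -- (0, 56, max_exp or bias_exp).
    case re_sel2 is
        when RES2_NEW => rexp2 := new_exp;
        when RES2_B   => rexp2 := r.b.exponent;
        when RES2_C   => rexp2 := r.c.exponent;
        when others   => rexp2 := re_const;
    end case;
    -- At most one of the two operands is negated.
    if re_neg1 = '1' then
        rexp1 := -rexp1;
    end if;
    if re_neg2 = '1' then
        rexp2 := -rexp2;
    end if;
    if re_set_result = '1' then
        v.result_exp := rexp1 + rexp2;
    end if;
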
For v.shift we also compute the sum of two values, either of which
can be negated (but not both). The first value can be chosen from
new_exp, B exponent, r.shift, or 0. The second value can be chosen
from the A exponent or a constant. The possible constants are 0, 1,
4, 8, 32, 52, 56, 63, 64, or the minimum exponent (min_exp). These
choices are controlled by the signals rs_sel1, rs_neg1, rs_sel2 and
rs_neg2. After the adder there is a multiplexer which selects either
the sum or a shift count for normalization (derived from a count
leading zeroes operation on R) to be written into v.shift. The
count-leading-zeroes result does not go through the adder for timing
reasons.
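The final write into v.shift is then along these lines (names other
than v.shift are illustrative):

    if rs_norm = '1' then
        -- Normalization: shift count from the count-leading-zeroes of
        -- R, routed around the adder for timing.
        v.shift := norm_shift;
    else
        -- Otherwise the sum of the two selected (and possibly negated)
        -- operands, as for v.result_exp above.
        v.shift := rshift1 + rshift2;
    end if;
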
In order to simplify the logic and help improve timing, the control
signals are in many places set unconditionally in a state, even if
their values are only needed when some condition is met.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
In preparation for an explicit exponent data path. The fix is that
fre[s] needs to negate the exponent after renormalization rather than
before; otherwise the exponent adjustment done by the renormalization
is in the wrong direction.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Instead of having a multiplexer in loadstore1 in order to be able to
put the instruction address into v.addr, we now set decode.input_reg_a
to CIA in the decode table entry for OP_FETCH_FAILED. That means that
the operand selection machinery in decode2 will supply the instruction
address to loadstore1 on the lv.addr1 input and no special case is
needed in loadstore1. This saves a few LUTs (~40 on the Artix-7).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This removes some logic that was previously added for the 16-byte
loads and stores (lq, lqarx, stq, stqcx.) and not completely removed
in commit c9e838b656 ("Remove support for lq, stq, lqarx and
stqcx.", 2022-06-04).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
By not assigning to interrupt_out.srr1 in some circumstances, the
writeback_1 process creates an inferred latch, which is not
desirable. Eliminate it by restructuring the code so
interrupt_out.srr1 is always set, to zeroes if nothing else.
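The pattern is simply a default assignment at the top of the process
(names other than interrupt_out.srr1 are illustrative):

    writeback_1 : process (all)
    begin
        -- Default assignment: every path through the process now
        -- drives srr1, so no latch is inferred.
        interrupt_out.srr1 <= (others => '0');
        if intr = '1' then
            interrupt_out.srr1 <= srr1_value;
        end if;
    end process;
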
Fixes: bc4d02cb0d ("Start removing SPRs from register file", 2022-07-12)
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This fixes a bug which causes a trace interrupt to store the wrong
value in SRR0 in the case where the instruction that has just
completed is followed by an sc (system call) instruction. What happens
is that first the traced instruction sets ex1.trace_next. Then, when
the sc instruction following it comes in, the execute1_actions process
sets v.e.last_nia to next_nia because it is an sc instruction, even
though it is not going to be executed -- we are going to take the
trace interrupt instead. Then when the trace interrupt is taken, we
incorrectly set SRR0 to the incremented address (the address of the
instruction following the sc).
To fix this, we have execute1_actions set a new flag if the current
instruction is sc, and only set v.e.last_nia to next_nia if we
actually execute the sc (in the "if go = '1'" case).
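In outline (the flag name and the OP_SC comparison below are
illustrative; go, next_nia and v.e.last_nia are from the existing
code):

    -- In execute1_actions: just note that this instruction is an sc.
    if e_in.insn_type = OP_SC then
        v.se.is_syscall := '1';
    end if;

    -- Later, only when the instruction actually executes:
    if go = '1' and actions.se.is_syscall = '1' then
        v.e.last_nia := next_nia;
    end if;
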
Fixes: 813e2317bf ("execute1: Restructure to separate out execution of side effects", 2022-06-18)
Reported-by: Anton Blanchard <anton@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
xics.vhdl:83:25:warning: declaration of "v" hides variable "v" [-Whide]
function bswap(v : in std_ulogic_vector(31 downto 0)) return std_ulogic_vector is
xics.vhdl:84:22:warning: declaration of "r" hides signal "r" [-Whide]
variable r : std_ulogic_vector(31 downto 0);
Signed-off-by: Joel Stanley <joel@jms.id.au>
fpu.vhdl:513:18:warning: declaration of "result" hides signal "result" [-Whide]
variable result : std_ulogic_vector(63 downto 0);
Signed-off-by: Joel Stanley <joel@jms.id.au>
Regenerate from upstream litex. Something in the update has improved
memory read and write performance quite a lot on my Nexys Video:
Before:
Write speed: 83.2MiB/s
Read speed: 140.4MiB/s
After:
Write speed: 352.1MiB/s
Read speed: 218.5MiB/s
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Fix the litex generate script to pass frequencies in Hz. Regenerate
the litesdcard Verilog for both Xilinx and Lattice. This fixes
litesdcard on my Nexys Video.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Building the mw_debug program leaves build artifacts in
microwatt/scripts/mw_debug, causing noise in the output of
`git status`. This commit adds them to .gitignore.
Signed-off-by: Boris Shingarov <shingarov@labware.com>
The following commit added two tests but didn't update the test
outputs:
commit 73cc5167ec
Author: Paul Mackerras <paulus@ozlabs.org>
Date: Mon May 9 19:18:42 2022 +1000
Use FPU for division instructions if we have an FPU
This patch updates them using tests/update_console_tests.
Signed-off-by: Michael Neuling <mikey@neuling.org>
This improves timing a little because the register addresses now come
directly from a latch instead of being calculated by
decode_input_reg_*. The asserts that check that the two are the same
are now in decode2 rather than register_file.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
With this, the register RAM is read synchronously using the addresses
supplied by decode1. That means the register RAM can now be block RAM
rather than LUT RAM.
Debug accesses are done via the B port on cycles when decode1
indicates that there is no valid instruction or the instruction
doesn't use a [F]RB operand.
We latch the addresses being read in each cycle and use the same
address next cycle if stalled. Data that is being written is latched
and a multiplexer on each read port then supplies the latched write
data if the read address for that port equals the write address.
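One read port then looks roughly like this; all the names are
illustrative (ieee.numeric_std assumed).

    -- Reuse the latched address while stalled so the synchronous read
    -- keeps returning the same register.
    rd_addr_a <= addr_a when stall = '0' else prev_addr_a;

    process (clk)
    begin
        if rising_edge(clk) then
            -- Synchronous read: lets the array infer block RAM.
            ram_data_a  <= registers(to_integer(unsigned(rd_addr_a)));
            prev_addr_a <= rd_addr_a;
            if write_enable = '1' then
                registers(to_integer(unsigned(write_addr))) <= write_data;
            end if;
            -- Latch the write for read-after-write forwarding.
            prev_write_addr <= write_addr;
            prev_write_data <= write_data;
            prev_write_en   <= write_enable;
        end if;
    end process;

    -- Forward the latched write data if this port read the same register.
    read_data_a <= prev_write_data
                   when prev_write_en = '1' and prev_addr_a = prev_write_addr
                   else ram_data_a;
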
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds some relatively simple logic to decode1 to compute the
GPR/FPR addresses that an instruction will access. It always computes
three addresses regardless of whether the instruction will actually
use all of them. The main things it computes are whether the
instruction uses the RS field or the RC field for the 3rd operand, and
whether the operands are FPRs or GPRs (it is possible for RS to be an
FPR but RA and RB to be GPRs, as for example with stfdx).
At the moment all we do with these computed register addresses is to
assert that they are identical to the ones coming from decode2 one
cycle later.
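The computation is roughly as below, assuming a register address
formed from an FPR/GPR bit on top of the 5-bit register number; the
flag names are illustrative.

    -- RA and RB always come from the same instruction fields.
    reg_a_addr <= a_is_fpr & insn(20 downto 16);
    reg_b_addr <= b_is_fpr & insn(15 downto 11);
    -- The third operand address comes from either the RC field or the
    -- RS field, depending on the instruction.
    reg_c_addr <= c_is_fpr & insn(10 downto 6) when third_op_is_rc = '1' else
                  c_is_fpr & insn(25 downto 21);
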
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This eliminates one leg of the output value multiplexer, and seems
to improve timing slightly on the A7-100.
Since SPR values are written in stage 3 and read in stage 2, an mfspr
immediately following an mtspr to the same SPR won't give the correct
value. To avoid this, we make mtspr to the load/store SPRs single
issue in decode1.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>