The ability to stop the core using the debug interface has been broken
since commit bb4332b7e6b5 ("Remove fetch2 pipeline stage"), which
removed a statement that cleared the valid bit on instructions when
their stop_mark was 1.
Fix this by clearing r.req coming out of fetch1 when r.stop_mark = 1.
This has the effect of making i_out.valid be 0 from the icache. We
also fix a bug in icache.vhdl where it was not honouring i_in.req when
use_previous = 1.
It turns out that the logic in fetch1.vhdl to handle stopping and
restarting was not correct, with the effect that stopping the core
would leave NIA pointing to the last instruction executed, not the
next instruction to be executed. In fact the state machine is
unnecessary and the whole thing can be simplified enormously: we
need to increment NIA whenever stop_in was 0 in the previous cycle.
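As a rough behavioural sketch (Python rather than the VHDL, with
redirects reduced to a single hypothetical argument), the simplified
rule amounts to:

    def next_nia(nia, stopped_last_cycle, redirect=None):
        # advance only if the core was running in the previous cycle,
        # so that on a stop NIA names the next instruction to execute
        if redirect is not None:
            return redirect
        return nia if stopped_last_cycle else nia + 4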
Fixes: bb4332b7e6b5 ("Remove fetch2 pipeline stage")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The tools complain about uart1_irq not being driven and not having a
default when HAS_UART1 is false. This sets it to 0 in that case.
Fixes: 7575b1e0c2 ("uart: Import and hook up opencore 16550 compatible UART")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
If BOOT_CLOCKS is false we currently get stuck in the flash
state machine. This patch from Ben fixes it.
Also fix an X state issue I see in Icarus Verilog where we need
to reset auto_state.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Some of the bits in the FPU buses end up in the Z state. Yosys
flags them, so we may as well clean them up.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Our flash controller fails when simulating with iverilog. Looking
closer, both wb_stash and auto_last_addr are in the X state, and things
fall apart after they get used.
Initialising them both fixes the iverilog issue.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Our Makefiles need some work, but for now create an FPGA target:
make FPGA_TARGET=verilator microwatt-verilator
ghdl and yosys can be run in containers by passing the PODMAN=1 or
DOCKER=1 options.
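For example, assuming the options combine with the target in the usual
make fashion:

    make FPGA_TARGET=verilator PODMAN=1 microwatt-verilator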
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
yosys and verilator did not like us passing in the verilog and
exporting it again. Pass the source directly to verilator instead.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
When building with yosys we assume hello_world fits in 8kB. There's
enough free space that we can adjust the linker script to make it fit.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
This adds a test with a bdnzl followed immediately by a bdnz, to check
that CTR and LR both get evaluated and written back correctly in this
situation.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Random execution testcases showed that a bdnzl which doesn't branch,
followed immediately by a bdnz, uses the wrong value for CTR for the
bdnz. Decode2 detects the read-after-write hazard on CTR and tells
execute1 to use the bypass path. However, the bdnzl takes two cycles
because it has to write back both CTR and LR, meaning that by the time
the bdnz starts to execute, r.e.write_data no longer contains the CTR
value, but instead contains zero.
To fix this, we make execute1 maintain the written-back value of CTR
in r.e.write_data across the cycle where LR is written back (this is
possible because the LR writeback uses the exc_write_data path).
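A cycle-by-cycle sketch in Python of what the fix keeps on the bypass
path (illustrative only; the names follow the description above but the
pipeline is reduced to straight-line code):

    def bdnzl_then_bdnz(ctr, nia):
        # cycle 1: bdnzl puts decremented CTR on the normal writeback path
        write_data = ctr - 1
        # cycle 2: bdnzl writes LR via exc_write_data; write_data must
        # keep holding CTR (previously it fell back to zero here)
        exc_write_data = nia + 4
        # cycle 3: bdnz picks up CTR from the bypass on write_data
        return write_data - 1          # CTR after the bdnz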
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Branch instructions which do a redirect and write both CTR and LR were
not doing the write to LR due to a logic error. This fixes it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
If an instruction fetch results in an instruction TLB miss, an
OP_FETCH_FAILED instruction is sent down the pipe. If the MSR[TE]
field is set for instruction tracing, the core currently considers
that executing the OP_FETCH_FAILED counts as having executed one
instruction and so generates a trace interrupt on the next valid
instruction, meaning that the trace interrupt happens before the
desired instruction rather than after it.
Fix this by not tracing OP_FETCH_FAILED instructions.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The masking enabled by opsel_amask is only used when rounding, to trim
the rounded result to the required precision. We now do the masking
after the adder rather than before (on the A input). This gives the
same result and helps timing. The path from r.shift through the mask
generator and adder to v.r was showing up as a critical path.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This moves longmask into the reg_type record, meaning that it now
needs to be decided a cycle earlier, in order to help timing.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This moves opsel_a into the reg_type record, meaning that the A
multiplexer input now needs to be decided a cycle earlier. This helps
timing by eliminating the combinatorial path from r.state and other
things to opsel_a and thence to in_a and result.
This means that some things now take an extra cycle, in particular
some of the exception cases such as when one or both operands are
NaNs. The NaN handling has been moved out to its own state, which
simplifies the logic for exception cases in other places. Additions
or subtractions where FRB's exponent is smaller than FRA's will
also take an extra cycle.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements fmadd, fmsub, fnmadd, fnmsub and their
single-precision counterparts. The single-precision versions operate
the same as the double-precision versions until the final rounding and
overflow/underflow steps.
This adds an S register to store the low bits of the product. S
shifts into R on left shifts, and can be negated, but doesn't do any
other arithmetic.
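As a toy model (Python; the widths and the combined-negation detail are
placeholders rather than the actual datapath), R:S behaves like one
double-width register whose low half only shifts and negates:

    R_BITS = S_BITS = 64
    R_MASK = (1 << R_BITS) - 1
    S_MASK = (1 << S_BITS) - 1

    def shift_left(r, s, n):
        # bits leaving the top of S shift into the bottom of R
        wide = ((r << S_BITS) | s) << n
        return (wide >> S_BITS) & R_MASK, wide & S_MASK

    def negate(r, s):
        # two's-complement negation of the combined R:S value
        wide = -((r << S_BITS) | s) & ((1 << (R_BITS + S_BITS)) - 1)
        return (wide >> S_BITS) & R_MASK, wide & S_MASK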
This adds a test for the double-precision versions of these
instructions.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements the floating-point square-root calculation using a table
lookup of the inverse square root approximation, followed by three
iterations of Goldschmidt's algorithm, which gives estimates of both
sqrt(FRB) and 1/sqrt(FRB). Then the residual is calculated as
FRB - R * R and that is multiplied by the 1/sqrt(FRB) estimate to get
an adjustment to R. The residual and the adjustment can be negative,
and since we have an unsigned multiplier, the upper bits can be wrong.
In practice the adjustment fits into an 8-bit signed value, and the
bottom 8 bits of the adjustment product are correct, so we sign-extend
them, divide by 4 (because R is in 10.54 format) and add them to R.
Finally the residual is calculated again and compared to 2*R+1 to see
if a final increment is needed. Then the result is rounded and
written back.
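The following Python sketch shows the shape of the calculation in plain
floating point, for intuition only: the real datapath is fixed point
(R in 10.54 format) with an unsigned multiplier, the lookup table is a
stand-in, and the final increment test against 2*R+1 is omitted.

    import math

    def rsqrt_table(b):
        # stand-in for the inverse square root table (~8 good bits)
        return round(256.0 / math.sqrt(b)) / 256

    def fsqrt_sketch(b, iterations=3):
        y = rsqrt_table(b)           # y ~ 1/sqrt(b)
        g = b * y                    # g ~ sqrt(b)
        h = 0.5 * y                  # h ~ 1/(2*sqrt(b))
        for _ in range(iterations):  # Goldschmidt iterations
            r = 0.5 - g * h
            g += g * r               # g -> sqrt(b)
            h += h * r               # h -> 1/(2*sqrt(b))
        residual = b - g * g         # FRB - R*R
        return g + residual * h      # adjust R by ~residual/(2*sqrt(b))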
This implements fsqrts as fsqrt, but with rounding to single precision
and underflow/overflow calculation using the single-precision exponent
range. This could be optimized later.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements frsqrte by table lookup. We first normalize the input
if necessary and adjust so that the exponent is even, giving us a
mantissa value in the range [1.0, 4.0), which is then used to look up
an entry in a 768-entry table. The 768 entries are appended to the
table for reciprocal estimates, giving a table of 1024 entries in
total. frsqrtes is implemented identically to frsqrte.
The estimate supplied is accurate to 1 part in 1024 or better.
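A Python sketch of the estimate path (the table contents, index scaling
and placement after the reciprocal entries are assumptions here, for
illustration only):

    import math

    RECIP_ENTRIES = 256
    rsqrt_table = [0.0] * RECIP_ENTRIES + [
        1.0 / math.sqrt(1.0 + (i + 0.5) / 256) for i in range(768)]

    def frsqrte_sketch(x):
        m, e = math.frexp(x)         # x = m * 2**e, with m in [0.5, 1)
        m, e = m * 2.0, e - 1        # renormalise so m is in [1, 2)
        if e % 2:                    # make the exponent even,
            m, e = m * 2.0, e - 1    # giving m in [1, 4)
        est = rsqrt_table[RECIP_ENTRIES + int((m - 1.0) * 256)]
        return est * 2.0 ** (-e // 2)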
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This just returns the value from the inverse lookup table. The result
is accurate to better than one part in 512 (the architecture requires
1/256).
This also adds a simple test, which relies on the particular values in
the inverse lookup table, so it is not a general test.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements floating-point division A/B by a process that starts
with normalizing both inputs if necessary. Then an estimate of 1/B
from a lookup table is refined by 3 Newton-Raphson iterations and then
multiplied by A to get a quotient. The remainder is calculated as
A - R * B (where R is the result, i.e. the quotient) and the remainder
is compared to 0 and to B to see whether the quotient needs to be
incremented by 1. The calculations of 1 / B are done with 56 fraction
bits and intermediate results are truncated rather than rounded,
meaning that the final estimate of 1 / B is always correct or a little
bit low, never too high, and thus the calculated quotient is correct
or 1 unit too low. Doing the estimate of 1 / B with sufficient
precision that the quotient is always correct to the last bit without
needing any adjustment would require many more bits of precision.
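A cut-down fixed-point sketch in Python (32 fraction bits instead of 56,
a placeholder table, and no sign or exponent handling) of the quotient
calculation and the final adjustment:

    FRAC = 32
    ONE = 1 << FRAC

    def recip_estimate(b):
        # stand-in for the reciprocal table: 1/b truncated to 8 bits
        return ((ONE * ONE) // b) & -(1 << (FRAC - 8))

    def fdiv_sketch(a, b):
        # a, b are mantissas in [ONE, 2*ONE), i.e. values in [1, 2)
        y = recip_estimate(b)
        for _ in range(3):           # Newton-Raphson: y <- y*(2 - b*y)
            y = y * (2 * ONE - (b * y >> FRAC)) >> FRAC
        q = a * y >> FRAC            # quotient, correct or a little low
        rem = (a << FRAC) - q * b    # remainder A - Q*B (rescaled)
        if rem >= b:                 # too low: bump the quotient
            q += 1
        return q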
This implements fdivs by computing a double-precision quotient and
then rounding it to single precision. It would be possible to
optimize this by e.g. doing only 2 iterations of Newton-Raphson and
then doing the remainder calculation and adjustment at single
precision rather than double precision.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements the fmul and fmuls instructions.
For fmul[s] with denormalized operands we normalize the inputs
before doing the multiplication, to eliminate the need for doing
count-leading-zeroes on P. This adds 3 or 5 cycles to the
execution time when one or both operands are denormalized.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements fctiw, fctiwz, fctiwu, fctiwuz, fctid, fctidz, fctidu
and fctiduz, and adds tests for them.
There are some subtleties around the setting of the inexact (XX) and
invalid conversion (VXCVI) flags in the FPSCR. If the rounded value
ends up being out of range, we need to set VXCVI and not XX. For a
conversion to unsigned word or doubleword of a negative value that
rounds to zero, we need to set XX and not VXCVI.
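For example, a behavioural sketch in Python (hypothetical helper; NaN
and rounding-mode handling omitted) of the rule for converting to an
unsigned word with truncation:

    def fctiwuz_sketch(frb):
        # returns (result, XX, VXCVI)
        rounded = int(frb)           # truncate toward zero
        if rounded < 0 or rounded > 0xffffffff:
            # out of range after rounding: set VXCVI, leave XX clear
            return (0 if rounded < 0 else 0xffffffff), False, True
        # in range; a negative value that rounded to zero is only inexact
        return rounded, rounded != frb, False

    assert fctiwuz_sketch(-0.3) == (0, True, False)   # XX, not VXCVI
    assert fctiwuz_sketch(-1.5) == (0, False, True)   # VXCVI, not XX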
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>