The existing Orange Crab target is for an older board with an
LFE5UM5G-85F device. Newer Orange Crab boards (v0.21) have an
LFE5U-85F device in the -8 speed grade, so make a new target for them
called ORANGE-CRAB-0.21.
Also add flags to ecppack to indicate that the bitstream should be
compressed and can be loaded at 38.8MHz.
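For reference, with Project Trellis's ecppack those options look roughly
like the line below; the file names are placeholders and the exact
Makefile integration may differ:

    ecppack --compress --freq 38.8 microwatt.config microwatt.bit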
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
These helpers convert addresses to/from wishbone addresses, and are used
in parts of the caches to make the code a bit more readable.
Along the way, rename some functions in the caches to make it a bit
clearer what they operate on, and fix a bug in the icache STOP_RELOAD
state where the wb address wasn't properly converted.
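A minimal sketch of the kind of helpers involved, assuming a 64-bit
(doubleword-addressed) wishbone bus; the names and widths here are
illustrative and may not match the ones in the tree:

    library ieee;
    use ieee.std_logic_1164.all;

    package wb_addr_sketch is
        constant REAL_ADDR_BITS : positive := 56;
        subtype real_addr_t is std_ulogic_vector(REAL_ADDR_BITS - 1 downto 0);
        -- 64-bit data bus: wishbone addresses are in units of doublewords
        subtype wb_addr_t is std_ulogic_vector(REAL_ADDR_BITS - 4 downto 0);
        function addr_to_wb(addr : real_addr_t) return wb_addr_t;
        function wb_to_addr(wb : wb_addr_t) return real_addr_t;
    end package;

    package body wb_addr_sketch is
        function addr_to_wb(addr : real_addr_t) return wb_addr_t is
        begin
            -- drop the 3 byte-offset bits
            return addr(REAL_ADDR_BITS - 1 downto 3);
        end function;

        function wb_to_addr(wb : wb_addr_t) return real_addr_t is
        begin
            -- put the byte-offset bits back as zeroes
            return wb & "000";
        end function;
    end package body;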
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This moves REAL_ADDR_BITS out of the caches and defines a real_addr_t
type for a real address, along with an addr_to_real() conversion helper.
It makes the VHDL a bit more readable.
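A minimal sketch of what this can look like; only the names
REAL_ADDR_BITS, real_addr_t and addr_to_real() are from this change, the
exact width and definitions shown are illustrative:

    library ieee;
    use ieee.std_logic_1164.all;

    package real_addr_sketch is
        constant REAL_ADDR_BITS : positive := 56;
        subtype real_addr_t is std_ulogic_vector(REAL_ADDR_BITS - 1 downto 0);
        -- Truncate a full 64-bit address to the real address bits the SoC
        -- actually decodes.
        function addr_to_real(addr : std_ulogic_vector(63 downto 0)) return real_addr_t;
    end package;

    package body real_addr_sketch is
        function addr_to_real(addr : std_ulogic_vector(63 downto 0)) return real_addr_t is
        begin
            return addr(real_addr_t'range);
        end function;
    end package body;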
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We have a bug where a store near a dcbz can cause the dcbz to only zero
8 bytes. Add a test case for this.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Note: there are a few patches to send upstream to fix an upstream breakage
of the litedram standalone generator, and to fix some issues with the way
liteeth is used on Wukong. All of these have pending pull requests.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
For now this supports only the V2 of the board (which has a slightly
different pinout) and only the A100T variant. I also haven't really added
GPIOs or anything else on the PMODs.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This fixes a bug where a dcbz can get incorrectly handled as an
ordinary 8-byte store if it arrives while the dcache state machine is
handling other stores with the same tag value (i.e. within the same
set-sized area of memory). The logic that says whether to include a
new store in the current wishbone cycle didn't take into account
whether the new store was a dcbz. This adds a "req.dcbz = '0'" factor
so that it does. This is necessary because dcbz is handled more like
a cache line refill (but writing to memory rather than reading) than
an ordinary store.
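A hedged distillation of the fixed condition, as a hypothetical helper
rather than the actual dcache state machine code:

    library ieee;
    use ieee.std_logic_1164.all;

    package dcbz_combine_sketch is
        -- A new request may only join the wishbone store cycle already in
        -- flight if it is an ordinary store, not a dcbz.
        function may_combine(req_valid, same_tag, req_dcbz : std_ulogic) return std_ulogic;
    end package;

    package body dcbz_combine_sketch is
        function may_combine(req_valid, same_tag, req_dcbz : std_ulogic) return std_ulogic is
        begin
            return req_valid and same_tag and not req_dcbz;
        end function;
    end package body;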
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This fixes two bugs in the flash invalidation of the icache.
The first is that an instruction could get executed twice. The
i-cache RAM is 2 instructions (64 bits) wide, so one read can supply
results for 2 cycles. The fetch1 stage tells the icache when the address
is equal to the address of the previous cycle plus 4. In cases
where that is true, bit 2 of the address is 1, and the previous cycle
was a cache hit, we just use the second word of the doubleword read
from the cache RAM. However, the cache hit/miss logic also continues
to operate, so in the case where the first word hits but the second
word misses (because of an icache invalidation or a snoop occurring in
the first cycle), we supply the instruction from the data previously
read from the icache RAM but also stall fetch1 and start a cache
reload sequence, and subsequently supply the second instruction
again. This fixes the issue by inhibiting req_is_miss and stall_out
when use_previous is true.
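A self-contained sketch of that gating, with hypothetical port names:

    library ieee;
    use ieee.std_logic_1164.all;

    entity icache_miss_gate is
        -- When the instruction is supplied from the doubleword read on the
        -- previous cycle, neither a miss nor a stall may be signalled, even
        -- if the hit/miss logic reports a miss for the second word.
        port (
            req_valid    : in  std_ulogic;
            tag_hit      : in  std_ulogic;
            use_previous : in  std_ulogic;
            req_is_miss  : out std_ulogic;
            stall_out    : out std_ulogic
        );
    end entity;

    architecture sketch of icache_miss_gate is
        signal miss : std_ulogic;
    begin
        miss        <= req_valid and not tag_hit and not use_previous;
        req_is_miss <= miss;
        stall_out   <= miss;
    end architecture;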
The second bug is that if an icache invalidation occurs while
reloading a line, we continue to reload the line, and make it valid
when the reload finishes, even though some of the data may have been
read before the invalidation occurred. This adds a new state
STOP_RELOAD which we go to if an invalidation happens while we are in
CLR_TAG or WAIT_ACK state. In STOP_RELOAD state we don't request any
more reads from memory and wait for the reads we have previously
requested to be acked, and then go to IDLE state. Data returned is
still written to the icache RAM, but that doesn't matter because the
line is invalid and is never made valid.
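A hypothetical distillation of the added transitions; the real state
machine also generates the wishbone requests, counts rows and so on:

    library ieee;
    use ieee.std_logic_1164.all;

    package icache_stop_reload_sketch is
        type reload_state_t is (IDLE, CLR_TAG, WAIT_ACK, STOP_RELOAD);
        -- Only the transitions relevant to this fix are modelled here.
        function next_state(cur : reload_state_t; inval : std_ulogic;
                            last_ack : std_ulogic) return reload_state_t;
    end package;

    package body icache_stop_reload_sketch is
        function next_state(cur : reload_state_t; inval : std_ulogic;
                            last_ack : std_ulogic) return reload_state_t is
        begin
            case cur is
                when CLR_TAG | WAIT_ACK =>
                    if inval = '1' then
                        -- stop issuing reads; just drain outstanding acks
                        return STOP_RELOAD;
                    elsif cur = WAIT_ACK and last_ack = '1' then
                        return IDLE;
                    end if;
                when STOP_RELOAD =>
                    if last_ack = '1' then
                        return IDLE;   -- line stays invalid
                    end if;
                when others =>
                    null;
            end case;
            return cur;
        end function;
    end package body;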
Note that we don't have to worry about invalidations due to snooped
writes while reloading a line, because the wishbone arbiter won't
switch to another master once it has started sending our reload
requests to memory. Thus a store to memory will either happen before
any of our reads have got to memory, or after we have finished the
reload (in which case we will no longer be in WAIT_ACK state).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
I'm not sure why I set the input frequency for the Orange Crab to 50MHz.
Since we easily make timing now, bump our output frequency to 48MHz as
well.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
This makes the 64-bit wishbone buses have the address expressed in
units of doublewords (64 bits), and similarly for the 32-bit buses the
address is in units of words (32 bits). This is to comply with the
wishbone spec. Previously the addresses on the wishbone buses were in
units of bytes regardless of the bus data width, which is not correct
and caused problems with interfacing with externally-generated logic.
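For example, under this scheme byte address 0x1000 is presented as 0x200
on a 64-bit-wide bus (units of 8 bytes) and as 0x400 on a 32-bit-wide bus
(units of 4 bytes), where previously 0x1000 would have appeared on both.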
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds an optional 16 bit x 16 bit signed multiplier and uses it
for multiply instructions that return the low 64 bits of the product
(mull[dw][o] and mulli, but not maddld) when the operands are both in
the range -2^15 .. 2^15 - 1. The "short" 16-bit multiplier produces
its result combinatorially, so a multiply that uses it executes in one
cycle. This improves the coremark result by about 4%, since coremark
does quite a lot of multiplies and they almost all have operands that
fit into 16 bits.
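The operand test amounts to checking that bits 63 down to 15 of each
operand are a sign extension of bit 15; both operands must pass for the
short multiplier to be used. A sketch of that check, with a hypothetical
helper name:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    package short_mult_sketch is
        -- True when a 64-bit operand is a correctly sign-extended 16-bit
        -- value, i.e. lies in -2**15 .. 2**15 - 1.
        function fits_in_16_bits(v : std_ulogic_vector(63 downto 0)) return boolean;
    end package;

    package body short_mult_sketch is
        function fits_in_16_bits(v : std_ulogic_vector(63 downto 0)) return boolean is
            constant upper : signed(48 downto 0) := signed(v(63 downto 15));
        begin
            -- bits 63..15 must all equal the sign of the 16-bit value
            return upper = 0 or upper = -1;
        end function;
    end package body;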
The presence of the short multiplier is controlled by a generic at the
execute1, SOC, core and top levels. For now, it defaults to off for
all platforms, and can be enabled using the --has_short_mult flag to
fusesoc.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Some critical path reports showed r1.req.addr depending on l_in.valid,
which then depended ultimately on the dcache's r1.ls_valid. In fact
we can update r1.req.addr (and other fields of r1.req, except for
r1.req.valid) independently of l_in.valid as long as busy = 0.
We do also need to preserve r1.req.addr0 when l_in.valid = 0, so we
pull it out of r1.req and store it separately in r1.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds a bit to the BTC to store whether the corresponding branch
instruction was taken last time it was encountered. That lets us pass
a not-taken prediction down to decode1, which for backwards direct
branches inhibits it from redirecting fetch to the target of the
branch. This increases coremark by about 2%.
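A hypothetical distillation of the resulting decode1 decision (signal
names are illustrative): a backwards direct branch is still statically
predicted taken, unless the BTC hit and its stored bit says the branch
was not taken last time.

    library ieee;
    use ieee.std_logic_1164.all;

    package btc_predict_sketch is
        function predict_taken(is_direct_branch, branch_backwards,
                               btc_hit, btc_taken : std_ulogic) return std_ulogic;
    end package;

    package body btc_predict_sketch is
        function predict_taken(is_direct_branch, branch_backwards,
                               btc_hit, btc_taken : std_ulogic) return std_ulogic is
        begin
            -- not-taken hint from the BTC suppresses the fetch redirect
            return is_direct_branch and branch_backwards and
                   not (btc_hit and not btc_taken);
        end function;
    end package body;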
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This changes s0 to use the P register rather than the A/B/C input
registers, thus improving the timing of the multiplier output. The
m00, m02 and m03 multipliers now use their P registers rather than the
M registers, moving the addition they do from the second cycle to the
first.
Also, the XOR that inverts the 32 LSBs is moved before the output
register.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
A non-cacheable load should only load the data requested and no more. We
do the right thing for real mode cache inhibited storage instructions,
but when loading through a non-cacheable PTE we load the entire 64 bits
regardless of the size.
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
The row in the decode table for isel with BC=0 was inadvertently left
marked as single-issue by commit 813f834012 ("Add CR hazard
detection", 2019-10-15). Fix it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
- mcrxrx put the bits in the wrong order
- addpcis was setting CR0 if the instruction bit 0 = 1, which it
shouldn't
- bpermd was always producing 0 and additionally had the wrong bit
numbering
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This checks that the store forwarding machinery in the dcache
correctly combines forwarded stores when they are partial stores
(i.e. only writing part of the doubleword, as for a byte store).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
We have two stages of forwarding to cover the two cycles of latency
between when something is written to BRAM and when that new data can
be read from BRAM. When the writes to BRAM result from store
instructions, the write may write only some bytes of a row (8 bytes)
and not others, so we have a mask to enable only the written bytes to
be forwarded. However, we only forward written data from either the
first stage of forwarding or the second, not both. So if we have
two stores in succession that write different bytes of the same row,
and then a load from the row, we will only forward the data from the
second store, and miss the data from the first store; thus the load
will get the wrong value.
To fix this, we make the decision on which forward stage to use for
each byte individually. This results in a 4-input multiplexer feeding
r1.data_out, with its inputs being the BRAM, the wishbone, the current
write data, and the 2nd-stage forwarding register. Each byte of the
multiplexer is separately controlled. The code for this multiplexer
is moved to the dcache_fast_hit process since it is used for cache
hits as well as cache misses.
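A simplified sketch of the per-byte selection, with hypothetical names;
the real multiplexer also has the wishbone data as a fourth input, and
the first (newer) forward stage takes priority over the second:

    library ieee;
    use ieee.std_logic_1164.all;

    package dcache_fwd_sketch is
        subtype row_t is std_ulogic_vector(63 downto 0);
        subtype sel_t is std_ulogic_vector(7 downto 0);
        function fwd_mux(bram, fwd1, fwd2 : row_t; use1, use2 : sel_t) return row_t;
    end package;

    package body dcache_fwd_sketch is
        function fwd_mux(bram, fwd1, fwd2 : row_t; use1, use2 : sel_t) return row_t is
            variable result : row_t;
        begin
            -- choose the source of each byte of the row individually
            for i in 0 to 7 loop
                if use1(i) = '1' then
                    result(i * 8 + 7 downto i * 8) := fwd1(i * 8 + 7 downto i * 8);
                elsif use2(i) = '1' then
                    result(i * 8 + 7 downto i * 8) := fwd2(i * 8 + 7 downto i * 8);
                else
                    result(i * 8 + 7 downto i * 8) := bram(i * 8 + 7 downto i * 8);
                end if;
            end loop;
            return result;
        end function;
    end package body;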
This also simplifies the BRAM code by ensuring that we can use the
same source for the BRAM address and way selection for writes, whether
we are writing store data or cache line refill data from memory.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This moves the way multiplexer for the data from the BRAM, and the
multiplexers for forwarding data from earlier stores or refills,
before a clock edge rather than after, so that now the data output
from the dcache comes from a clean latch. To do this we remove the
extra latch on the output of the data BRAM (i.e. ADD_BUF=false) and
rearrange the logic. The choice of whether to forward or not now depends
not on way comparisons but rather on tag comparisons, for the sake
of timing.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>