This makes the 64-bit wishbone buses express the address in units of
doublewords (64 bits), and similarly the 32-bit buses express the
address in units of words (32 bits). This is required to comply with
the wishbone spec. Previously, addresses on the wishbone buses were in
units of bytes regardless of the bus data width, which is not correct
and caused problems when interfacing with externally-generated logic.
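For illustration only (this is not the actual VHDL; the function and
names below are made up), the byte-to-bus-address conversion is just a
shift by log2 of the bus width in bytes:

    # Sketch of how a byte address maps onto a wishbone address expressed
    # in units of the bus data width (illustrative, not the real code).
    def wb_addr_from_byte_addr(byte_addr: int, data_width_bits: int) -> int:
        """Drop the low bits that select a byte within one bus transfer."""
        bytes_per_beat = data_width_bits // 8      # 8 for 64-bit, 4 for 32-bit
        shift = bytes_per_beat.bit_length() - 1    # log2(bytes_per_beat)
        return byte_addr >> shift

    # 64-bit bus: addresses are in doublewords, byte address 0x108 -> 0x21
    assert wb_addr_from_byte_addr(0x108, 64) == 0x21
    # 32-bit bus: addresses are in words, byte address 0x108 -> 0x42
    assert wb_addr_from_byte_addr(0x108, 32) == 0x42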
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
litedram ignores a couple of signals on its "pseudo-axi" port;
this adds a bit of documentation around that.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Make the DRAM data line and user port widths configurable, and don't
hard-wire a dependency on the wishbone data width.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This implements in the L2 cache the feature already present in the L1
caches that allows a request to be completed before the end of a
refill, using partial line valid bits, and that starts a refill from
the row of the first miss on that line instead of from the beginning
of the line.
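Roughly, the idea looks like this (purely illustrative Python, not the
actual cache logic; the row count is an assumption):

    # Per-row valid bits during a refill that starts at the missing row.
    ROWS_PER_LINE = 8   # e.g. a 128-byte line refilled in 16-byte rows

    def refill_order(first_miss_row: int):
        # Rows are fetched starting at the missing row, wrapping around.
        return [(first_miss_row + i) % ROWS_PER_LINE
                for i in range(ROWS_PER_LINE)]

    row_valid = [False] * ROWS_PER_LINE
    for row in refill_order(first_miss_row=5):
        row_valid[row] = True   # mark each row as it arrives from memory
        # A request hitting an already-valid row can complete here, before
        # the rest of the line has been refilled.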
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
There is a long timing path to generate the ack signal from
the L2 cache, as it's fully combinational for stores and includes
signals coming from litedram.
Instead, pipeline the store acks. This will introduce a cycle of
latency but should improve timing. Also, the core will eventually
be smart enough not to wait for store acks before completing stores
anyway.
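In other words (an illustrative sketch, not the real design; the names
are invented), the store ack now goes through a register instead of
being derived combinationally in the same cycle:

    # One-cycle pipeline stage for store acks (illustrative only).
    class StoreAckPipe:
        def __init__(self):
            self.ack_r = False              # registered ack, adds one cycle

        def clock(self, store_accepted: bool) -> bool:
            ack_out = self.ack_r            # what the wishbone master sees
            self.ack_r = store_accepted     # latch this cycle's result
            return ack_out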
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This breaks the long stall signal coming back to the processor
and helps improve overall timing.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Currently, there's a huge mux gathering the output of all the PLRUs
to select the victim way on a cache miss. This is fed combinationally
into the clearing of the valid bits and tags.
In order to help timing, let's store it instead and perform the
clearing on the next cycle. The L2 doesn't respond to requests
when not in the IDLE state, so this should have no negative effects.
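The gist of it, as an illustrative sketch only (not the actual VHDL,
and the structure names are invented):

    # Two-cycle victim handling: capture the PLRU mux output, clear later.
    class VictimSelect:
        def __init__(self):
            self.victim_way = None          # registered output of the PLRU mux

        def on_miss(self, plru_victims, index):
            # Cycle 1: just store the selected victim way.
            self.victim_way = plru_victims[index]

        def next_cycle(self, valids, tags, index):
            # Cycle 2: clear valid/tag for the stored victim. The L2 is not
            # in IDLE, so no request can observe the intermediate state.
            valids[index][self.victim_way] = False
            tags[index][self.victim_way] = 0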
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We still had some wires bringing an extra serial port out of
litedram for the built-in riscv processor. This is all gone now
so take them out.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This increases the number of L2 lines from 32 to 64. The BRAM usage is
the same, as the BRAMs were only half used before. There's an increase
in LUTs and registers due to the extra tags and valid bits, but none of
it should be in a space-constrained or timing-critical path.
We could make the lines wider instead (256 bytes), which would reduce
usage, but that increases the latency by 8 cycles (a 256-byte line is
16 beats on the 128-bit LiteDRAM port instead of 8). Something to
consider once the L2 is capable of early response on a miss and of
starting reloads from any point in a line.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Support for this has bitrotted and would require refactoring of the L2
to be brought back. It's also not really needed anymore now that we
ship pre-generated litedram and that LiteX supports what we do.
So take it out, which simplifies some of the scripts as well. This also
fixes up the CSR alignment in the sim model.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The test bench tests simple access forms for now; it's a starting
point, but it already helped find and fix a bug.
This includes a litedram update to be able to operate the sim model
without inits.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This adds a cache between the wishbone and litedram with the following
features (at this point, it's still evolving):
- 128-byte line width in order to have a reasonable amount of
litedram pipelining on the 128-bit wide data port.
- Configurable geometry otherwise (a rough sketch of the address split
follows the list)
- Stores are acked immediately on the wishbone, whether hit or miss
(minus a 2-cycle delay if there's a previous load response in the
way), and sent to LiteDRAM via an 8-entry (configurable) store queue
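As a rough sketch of the geometry (the line width, port width and queue
depth are the defaults above; the number of lines and the field layout
are assumptions for illustration, ignore associativity, and the real
VHDL generics may differ):

    LINE_BYTES     = 128    # line width (from above)
    DRAM_DATA_BITS = 128    # LiteDRAM data port width (from above)
    NUM_LINES      = 64     # assumption, for illustration only
    STOREQ_ENTRIES = 8      # store queue depth (default, configurable)

    ROW_BYTES     = DRAM_DATA_BITS // 8        # 16 bytes per beat
    ROWS_PER_LINE = LINE_BYTES // ROW_BYTES    # 8 beats per line refill

    def split_address(byte_addr: int):
        # Split a byte address into (tag, index, row, offset) fields.
        offset = byte_addr % ROW_BYTES
        row    = (byte_addr // ROW_BYTES) % ROWS_PER_LINE
        index  = (byte_addr // LINE_BYTES) % NUM_LINES
        tag    = byte_addr // (LINE_BYTES * NUM_LINES)
        return tag, index, row, offset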
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This adds an option to disable the main BRAM and instead copy a
payload, stashed along with the init code in the secondary BRAM,
into DRAM and boot from there.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This adds a simulated litedram model along with the necessary
Makefile gunk to verilate it and wrap it for use by ghdl.
The core_dram_tb test bench is a variant of core_tb with
LiteDRAM simulated. It's not built by default; an explicit
make core_dram_tb
is necessary, so as not to require verilator to be installed for
the normal build process (also it's slow'ish).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This comes in two parts:
- A generator script which uses LiteX to generate litedram cores
along with their init files for various boards (currently Arty and
Nexys-video). This comes with configs for arty and nexys_video.
- A fusesoc "generator" which uses pre-generated litedram cores
The generation process is manual on purpose. This commit includes
pre-generated cores for the two boards above.
This is done so that one doesn't have to install LiteX to build
microwatt. In addition, the generator script or wrapper vhdl tend to
break when LiteX changes significantly, which does happen.
This is still rather standalone and hasn't been plumbed into the SoC
or the FPGA toplevel files yet.
At this point LiteDRAM self-initializes using a built-in VexRiscv
"Minimum" core obtained from LiteX and included in this commit. There
is some plumbing to generate cores that are initialized by Microwatt
directly, but this isn't working yet and so isn't enabled.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>