Commit Graph

675 Commits (49f1389a2196a9daf756f2a92a3af0c01722d539)

Author SHA1 Message Date
Boris Shingarov 49f1389a21 Fix ld error in elf maketarget
The sdram_init ELF fails to link:

powerpc64le-linux-gnu-ld -static -nostdlib -T sdram_init.lds \
    --gc-sections -o sdram_init.elf head.o main.o sdram.o console.o \
    libc.o sdram_init.lds
powerpc64le-linux-gnu-ld: error: linker script file 'sdram_init.lds'
    appears multiple times
make: *** [Makefile:70: sdram_init.elf] Error 1

This is because sdram_init.lds is one of the prerequisites, and thus is
contained in $^.  However, it is also explicitly specified as part of
LDFLAGS, as the argument to -T, so the linker script ends up on the ld
command line twice.

Signed-off-by: Boris Shingarov <shingarov@labware.com>
5 years ago
Michael Neuling 7566f04fe3
Merge pull request #211 from shenki/spi-constraint
spi: Fix dat_i_l constraints
5 years ago
Joel Stanley 60e5f7b958 spi: Fix dat_i_l constraints
No cells matched 'get_cells -hierarchical -filter {NAME =~*/spi_rxtx/dat_i_l*}'. [build/microwatt_0/src/microwatt_0/fpga/arty_a7.xdc:42]

The signal is in its own process so the net name ends up being
spi_rxtx/input_delay_1.dat_i_l_reg.

After this change the log shows:

Applied set_property IOB = TRUE for soc0/\spiflash_gen.spiflash /spi_rxtx/\input_delay_1.dat_i_l_reg . (constraint file  fpga/arty_a7.xdc, line 42).
Applied set_property IOB = TRUE for soc0/\spiflash_gen.spiflash /spi_rxtx/\input_delay_1.dat_i_l_reg . (constraint file  fpga/arty_a7.xdc, line 42).
Applied set_property IOB = TRUE for soc0/\spiflash_gen.spiflash /spi_rxtx/\input_delay_1.dat_i_l_reg . (constraint file  fpga/arty_a7.xdc, line 42).
Applied set_property IOB = TRUE for soc0/\spiflash_gen.spiflash /spi_rxtx/\input_delay_1.dat_i_l_reg . (constraint file  fpga/arty_a7.xdc, line 42).

Signed-off-by: Joel Stanley <joel@jms.id.au>
5 years ago
Michael Neuling 695e081c35
Merge pull request #210 from ozbenh/xics
xics: Cleanups and add a simple ICS for use by Linux
5 years ago
Benjamin Herrenschmidt bb54af59de xics: Add support for reduced priority field size
This makes the ICS support fewer than the 8 architected priority bits
and sets the SoC to use 3 bits by default.

A value with all the supported bits set translates to "masked" (and will
read back as 0xff); any smaller value is used as-is.

Linux doesn't use priorities above 5, so this is a way to save
silicon. The number of supported priority bits is exposed to the
OS via the config register.
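
As an illustration of the behaviour described above, here is a hypothetical C model with the default 3 priority bits (not the actual RTL):

#include <stdint.h>

#define PRIO_BITS   3                          /* SoC default per this commit */
#define PRIO_MASKED ((1u << PRIO_BITS) - 1u)   /* all supported bits set */

/* A stored priority with all supported bits set means "masked" and
 * reads back as 0xff; any smaller value reads back unchanged. */
static uint8_t prio_read_back(uint8_t stored)
{
    return (stored & PRIO_MASKED) == PRIO_MASKED ? 0xff : stored;
}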

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 5c2fc47e2c xics: Add simple ICS
Move the external interrupt generation to a separate module, "ICS"
(the source controller), which has a register per source, currently
containing only the priority control.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 8080168327 xics/icp: MFRR starts at 0xff not 0x00
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 0b82024b01 tests/xics: Ensure no compiler optimisations in delay()
This is in case the compiler would be tempted to "read ahead" past the delay function.
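
A minimal sketch of the sort of construct that achieves this (illustrative; the test's actual delay() may differ):

/* A volatile loop counter plus an empty asm statement acting as a
 * compiler barrier keeps the loop from being elided and stops loads
 * from being hoisted across it. */
static void delay(void)
{
    for (volatile int i = 0; i < 10; i++)
        __asm__ volatile("" ::: "memory");
}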

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 0fa14f6dec xics: ICP should be big endian !
That's how Linux expects it. This also simplifies the
register access implementation since the bit fields now
align properly regardless of the access size.
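
From the software side on the little-endian core, ICP accesses then go through byte-swapping MMIO accessors; a hypothetical sketch (accessor names assumed, not from the source):

#include <stdint.h>

/* Hypothetical big-endian MMIO accessors, as a little-endian OS would
 * use for the ICP registers. */
static inline uint32_t icp_in_be32(const volatile uint32_t *reg)
{
    return __builtin_bswap32(*reg);
}

static inline void icp_out_be32(volatile uint32_t *reg, uint32_t val)
{
    *reg = __builtin_bswap32(val);
}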

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 311b653d80 tests: Fix Makefile.test to not allow host includes
xics was including the host limits.h, for example.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Michael Neuling b90a0a2139
Merge pull request #208 from paulusmack/faster
Make the core go faster

Several major improvements in here:
- Simple branch predictor
- Reduced latency for mispredicted branches and interrupts by removing fetch2 stage
- Cache improvements
  o Request critical dword first on refill
  o Handle hits while refilling, including on line being refilled
  o Sizes doubled for both D and I
- Loadstore improvements: can now do one load or store every two cycles in most cases
- Optimized 2-cycle multiplier for Xilinx 7-series parts using DSP slices
- Timing improvements, including:
  o Stash buffer in decode1
  o Reduced width of execute1 result mux
  o Improved SPR decode in decode1
  o Some non-critical operations take a cycle longer so we can break some long combinatorial chains
- Core logging: logs 256 bits of info every cycle into a ring buffer, to help with debugging and performance analysis

This increases the LUT usage for the "synth" + A35 target from 9182 to 10297, an increase of about 12%.
5 years ago
Paul Mackerras 1fedc7a86a
Merge pull request #207 from ozbenh/misc
Random cleanups of the SoC interfaces
5 years ago
Paul Mackerras 64efd494e5 fpga: Add a xilinx_specific fileset to microwatt.core
At present this just has the Xilinx-specific multiplier code, but
might in future have other things.

This also adds the xilinx_specific fileset to the synth target.
Without that it was failing because there was no multiplier.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 78de4fef72 Make LOG_LENGTH configurable per FPGA variant
This plumbs the LOG_LENGTH parameter (which controls how many entries
the core log RAM has) up to the top level so that it can be set on
the fusesoc command line and have different default values on
different FPGAs.

It now defaults to 512 entries generally and on the Artix-7 35 parts,
and 2048 on the larger Artix-7 FPGAs.  It can be set to 0 if desired.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras ec2fa61792 execute1: Reduce width of the result mux to help timing
This reduces the number of different things that are assigned to
the result variable.

- The computations for the popcnt, prty, cmpb and exts instruction
  families are moved into the logical unit.
- The result of mfspr from the slow SPRs is computed in 'spr_val'
  before being assigned to 'result'.
- Writes to LR as a result of a blr or bclr instruction are done
  through the exc_write path to writeback.

This eases timing considerably.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 6687aae4d6 core: Implement a simple branch predictor
This implements a simple branch predictor in the decode1 stage.  If it
sees that the instruction is b or bc and the branch is predicted to be
taken, it sends a flush and redirect upstream (to icache and fetch1)
to redirect fetching to the branch target.  The prediction is sent
downstream with the branch instruction, and execute1 now only sends
a flush/redirect upstream if the prediction was wrong.  Unconditional
branches are always predicted to be taken, and conditional branches
are predicted to be taken if and only if the offset is negative.
Branches that take the branch address from a register (bclr, bcctr)
are predicted not taken, as we don't have any way to predict the
branch address.
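
As an illustrative sketch of that prediction rule in C (not the actual decode1 VHDL; the flags here are assumptions for illustration):

/* Illustrative model of the static prediction rule described above:
 * unconditional branches predicted taken, conditional branches
 * predicted taken iff the offset is negative, bclr/bcctr predicted
 * not taken since the target comes from a register. */
static int predict_taken(int is_unconditional, int is_reg_indirect,
                         long long offset)
{
    if (is_reg_indirect)        /* bclr, bcctr */
        return 0;
    if (is_unconditional)       /* b, bl */
        return 1;
    return offset < 0;          /* backward conditional branches taken */
}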

Since we can now have a mflr being executed immediately after a bl
or bcl, we now track the update to LR in the hazard tracker, using
the second write register field that is used to track RA updates for
update-form loads and stores.

For those branches that update LR but don't write any other result
(i.e. that don't decrement CTR), we now write back LR in the same
cycle as the instruction rather than taking a second cycle for the
LR writeback.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 09ae2ce58d decode1: Improve timing for slow SPR decode path
This makes the logic that works out decode.unit and decode.sgl_pipe
for mtspr/mfspr to/from slow SPRs detect the fact that the
instruction is mtspr/mfspr based on a match with the instruction
word rather than looking at v.decode.insn_type.  This improves timing
substantially, as the ROM lookup to get v.decode is relatively slow.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras b3799c432b decode1: Add a stash buffer to the output
This means that the busy signal from execute1 (which can be driven
combinatorially from mmu or dcache) now stops at decode1 and doesn't
go on to icache or fetch1.  This helps with timing.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Benjamin Herrenschmidt 67b6117ebf soc: Slight cleanup of IRQ assignments
Use a separate process to assign selected interrupts to the
interrupt array, and document them.

There's only one interrupt *for now* but that will change
and this way is clearer.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt e07b3dd6fa soc: Rename uart_dat8 to uart0_dat8
Just for consistency. Will come in handy if we ever add a second one.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt f9f18906a3 soc: Rename wb_dram_ctrl to wb_ext_io and rework decoding
This makes the control bus currently going out of "soc" towards
litedram more generic for external IO devices added by the
top-level rather than inside the SoC proper.

This is mostly renaming of signals and a small change on how the
address decoder operates, using a separate "cascaded" decode for
the external IOs.

We make the region 0xc8nn_nnnn be the "external IO" region for
now.

This will make it easier / cleaner to add more external devices.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Paul Mackerras a4500c63a2 dcache: Reduce back-to-back store latency from 3 cycles to 2
This uses the machinery we already had for comparing the real address
of a new request with the tag of a previous request (r1.reload_tag)
to get better timing on comparing the address of a second store with
the one in progress.  The comparison is now on the set size rather
than the page size, but since set size can't be larger than the page
size (and usually will equal the page size), that is OK.

The same comparison can also be used to tell when we can satisfy
a load miss during a cache line refill.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Benjamin Herrenschmidt bf7def5503 soc: Don't require dram wishbones signals to be wired by toplevel
Currently, when not using litedram, the top level still has to hook
up "dummy" wishbones to the main dram and control dram busses coming
out of the SoC and provide ack signals.

Instead, make the SoC generate the acks internally when not using
litedram and use defaults to make the wiring entirely optional.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 1ffc89e58b soc: Add defaults for some input signals
That way the top levels don't need to assign them.

Also remove generics that are set to the default anyway.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 4244b54984 soc: Remove unused RESET_LOW generic
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Paul Mackerras aebd915f8f mmu: Take an extra cycle to do TLB invalidations
This makes the TLB invalidations that occur as a result of a tlbie,
slbia or mtspr instruction take one more cycle.  This breaks some
long combinatorial chains from decode2 to dcache and icache and
thus eases timing.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras b595963233 dcache: Reduce latencies and improve timing
This implements various improvements to the dcache with the aim of
making it go faster.

- We can now execute operations that don't need to access main memory
  (cacheable loads that hit in the cache and TLB operations) as soon
  as any previous operation has completed, without waiting for the
  state machine to become idle.

- Cache line refills start with the doubleword that is needed to
  satisfy the load that initiated them.

- Cacheable loads that miss return their data and complete as soon as
  the requested doubleword comes back from memory; they don't wait for
  the refill to finish.

- We now have per-doubleword valid bits for the cache line being
  refilled, meaning that if a load comes in for a line that is in the
  process of being refilled, we can return the data and complete it
  within a couple of cycles of the doubleword coming in from memory.

- There is now a bypass path for data being written to the cache RAM
  so that we can do a store hit followed immediately by a load hit to
  the same doubleword.  This also makes the data from a refill
  available to load hits one cycle earlier than it would be otherwise.

- Stores complete in the cycle where their wishbone operation is
  initiated, without waiting for the wishbone cycle to complete.

- During the wishbone cycle for a store, if another store comes in
  that is to the same page, and we don't have a stall from the
  wishbone, we can send out the write for the second store in the same
  wishbone cycle and without going through the IDLE state first.  We
  limit it to 7 outstanding writes that have not yet been
  acknowledged.

- The cache tag RAM is now read on a clock edge rather than being
  combinatorial for reading.  Its width is rounded up to a multiple of
  8 bits per way so that byte enables can be used for writing
  individual tags.

- The cache tag RAM is now written a cycle later than previously, in
  order to ease timing.

- Data for a store hit is now written one cycle later than
  previously.  This eases timing since we don't have to get through
  the tag matching and on to the write enable within a single cycle.
  The 2-stage bypass path means we can still handle a load hit on
  either of the two cycles after the store and return the correct
  data.  (A load hit 3 or more cycles later will get the correct data
  from the BRAM.)

- Operations can sit in r0 while there is an uncompleted operation in
  r1.  Once the operation in r1 is completed, the operation in r0
  spends one cycle in r0 for TLB/cache tag lookup and then gets put
  into r1.req.  This can happen before r1 gets to the IDLE state.
  Some operations can then be completed before r1 gets to the IDLE
  state - a load miss to the cache line being refilled, or a store to
  the same page as a previous store.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 65a36cc0fc decode: Work out ispr1/ispr2 in parallel with decode ROM lookup
This makes the logic that calculates which SPRs are being accessed
work in parallel with the instruction decode ROM lookup instead of
being dependent on the opcode found in the decode ROM.  The reason
for doing that is that the path from icache through the decode ROM
to the ispr1/ispr2 fields has become a critical path.

Thus we are now using only a very partial decode of the instruction
word in the logic for ispr1/ispr2, and we therefore can no longer rely
on them being zero in all cases where no SPR is being accessed.
Instead, decode2 now ignores ispr1/ispr2 in all cases except when the
relevant decode.input_reg_a/b or decode.output_reg_a is set to SPR.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 209aa9ce3f loadstore1: Reduce busy cycles
This reduces the number of cycles where loadstore1 asserts its busy
output, leading to increased throughput of loads and stores.  Loads
that hit in the cache can now be executed at the rate of one every two
cycles.  Stores take 4 cycles assuming the wishbone slave responds
with an ack the cycle after we assert strobe.

To achieve this, the state machine code is split into two parts, one
for when we have an existing instruction in progress, and one for
starting a new instruction.  We can now combinatorially clear busy and
start a new instruction in the same cycle that we get a done signal
from the dcache; in other words we are completing one instruction and
potentially writing back results in the same cycle that we start a new
instruction and send its address and data to the dcache.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 1d09daae03 loadstore1: Complete mfspr/mtspr a cycle later
This makes mfspr and mtspr complete (and mfspr write back) on the
cycle after the instruction is received from execute1, rather than
on the same cycle.  This makes them match all other instructions
that execute in one cycle.  Because these instructions are marked
as single-issue, there wasn't the possibility of having two
instructions complete on the same cycle (which we can't cope with),
but it is better to fix this.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 6701e7346b core: Use a busy signal rather than a stall
This changes the instruction dependency tracking so that we can
generate a "busy" signal from execute1 and loadstore1 which comes
along one cycle later than the current "stall" signal.  This will
enable us to signal busy cycles only when we need to from loadstore1.

The "busy" signal from execute1/loadstore1 indicates "I didn't take
the thing you gave me on this cycle", as distinct from the previous
stall signal which meant "I took that but don't give me anything
next cycle".  That means that decode2 proactively gives execute1
a new instruction as soon as it has taken the previous one (assuming
there is a valid instruction available from decode1), and that then
sits in decode2's output until execute1 can take it.  So instructions
are issued by decode2 somewhat earlier than they used to be.

Decode2 now only signals a stall upstream when its output buffer is
full, meaning that we can fill up bubbles in the upstream pipe while a
long instruction is executing.  This gives a small boost in
performance.

This also adds dependency tracking for rA updates by update-form
load/store instructions.

The GPR and CR hazard detection machinery now has one extra stage,
which may not be strictly necessary.  Some of the code now really
only applies to PIPELINE_DEPTH=1.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 62b24a8dae icache: Improve latencies when reloading cache lines
The icache can now detect a hit on a line being refilled from memory,
as we have an array of individual valid bits per row for the line
that is currently being loaded.  This enables the request that
initiated the refill to be satisfied earlier, and also enables
following requests to the same cache line to be satisfied before the
line is completely refilled.  Furthermore, the refill now starts
at the row that is needed.  This should reduce the latency for an
icache miss.

We now get a 'sequential' indication from fetch1, and use that to know
when we can deliver an instruction word using the other half of the
64-bit doubleword that was read last cycle.  This doesn't make much
difference at the moment, but it frees up cycles where we could test
whether the next line is present in the cache so that we could
prefetch it if not.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 0809bc898b multiply: Use DSP48 slices for multiplication on Xilinx FPGAs
This adds a custom implementation of the multiplier which uses 16
DSP48E1 slices to do a 64x64 bit multiplication in 2 cycles.
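
The underlying arithmetic is a sum of 16 partial products; a hypothetical C model of that decomposition (16-bit chunks chosen for simplicity here, not necessarily the widths the VHDL maps onto the DSP slices):

#include <stdint.h>

/* Illustrative only: a 64x64 -> 128-bit multiply built from 16 partial
 * products of 16-bit chunks, i.e. one partial product per multiplier. */
static void mul64x64(uint64_t a, uint64_t b, uint64_t res[2])
{
    unsigned __int128 acc = 0;   /* GCC/Clang extension */

    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            uint64_t pa = (a >> (16 * i)) & 0xffff;
            uint64_t pb = (b >> (16 * j)) & 0xffff;
            acc += (unsigned __int128)(pa * pb) << (16 * (i + j));
        }
    }
    res[0] = (uint64_t)acc;          /* low half */
    res[1] = (uint64_t)(acc >> 64);  /* high half */
}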

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 9880fc7435 multiply: Move selection of result bits into execute1
This puts the logic that selects which bits of the multiplier result
get written into the destination GPR into execute1, moved out from
multiply.

The multiplier is now expected to do an unsigned multiplication of
64-bit operands, optionally negate the result, detect 32-bit
or 64-bit signed overflow of the result, and return a full 128-bit
result.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras f80da65799 core: Double the dcache and icache sizes
This makes the dcache and icache both be 8kB.  This still only uses
one BRAM per way per cache on the Artix-7, since the BRAMs were only
half-used previously.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras b5a7dbb78d core: Remove fetch2 pipeline stage
The fetch2 stage existed primarily to provide a stash buffer for the
output of icache when a stall occurred.  However, we can get the same
effect -- of having the input to decode1 stay unchanged on a stall
cycle -- by using the read enable of the BRAMs in icache, and by
adding logic to keep the outputs unchanged on a clock cycle when
stall_in = 1.  This reduces branch and interrupt latency by one
cycle.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 49a4d9f67a Add core logging
This logs 256 bits of data per cycle to a ring buffer in BRAM.  The
data collected can be read out through 2 new SPRs or through the
debug interface.

The new SPRs are LOG_ADDR (724) and LOG_DATA (725).  LOG_ADDR contains
the buffer write pointer in the upper 32 bits (in units of entries,
i.e. 32 bytes) and the read pointer in the lower 32 bits (in units of
doublewords, i.e. 8 bytes).  Reading LOG_DATA gives the doubleword
from the buffer at the read pointer and increments the read pointer.
Setting bit 31 of LOG_ADDR inhibits the trace log system from writing
to the log buffer, so the contents are stable and can be read.

There are two new debug addresses which function similarly to the
LOG_ADDR and LOG_DATA SPRs.  The log is frozen while either or both of
the LOG_ADDR SPR bit 31 or the debug LOG_ADDR register bit 31 are set.

The buffer defaults to 2048 entries, i.e. 64kB.  The size is set by
the LOG_LENGTH generic on the core_debug module.  Software can
determine the length of the buffer because the length is ORed into the
buffer write pointer in the upper 32 bits of LOG_ADDR.  Hence the
length of the buffer can be calculated as 1 << (31 - clz(LOG_ADDR)).
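
A hypothetical sketch of reading the log from code running on the core, using the SPR numbers and the length formula above (helper names are assumptions; GCC-style inline asm assumed):

#include <stdint.h>

/* SPR numbers from this commit: LOG_ADDR = 724, LOG_DATA = 725. */
static inline uint64_t read_log_addr(void)
{
    uint64_t v;
    __asm__ volatile("mfspr %0, 724" : "=r"(v));
    return v;
}

static inline uint64_t read_log_data(void)   /* also advances the read pointer */
{
    uint64_t v;
    __asm__ volatile("mfspr %0, 725" : "=r"(v));
    return v;
}

/* Buffer length in entries; the length bit ORed into the upper 32 bits
 * of LOG_ADDR keeps the argument to clz non-zero. */
static inline uint64_t log_length(void)
{
    return 1ull << (31 - __builtin_clzll(read_log_addr()));
}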

There is a program to format the log entries in a somewhat readable
fashion in scripts/fmt_log/fmt_log.c.  The log_entry struct in that
file describes the layout of the bits in the log entries.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras aab84acda8 scripts/mw_debug: Make progress counts display on one line
This outputs a carriage return rather than a newline after the
display of the progress count during the load and save operations.
This makes the output more compact and better looking.
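
In C terms the change amounts to something like the following (illustrative, not the actual mw_debug code):

#include <stdio.h>

/* '\r' returns the cursor to column 0 so each new count overwrites the
 * previous one; flush explicitly because '\r' does not trigger
 * line-buffered output. */
static void show_progress(unsigned long done, unsigned long total)
{
    printf("%lu/%lu\r", done, total);
    fflush(stdout);
}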

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 03f9d7a97e tests/xics: Fix assumption that interrupts happen immediately
Currently the test writes to the XICS and then checks that the
expected interrupt has happened.  This turns into a stbcix
instruction followed immediately by a load from the variable that
indicates whether an interrupt has happened.  It is possible for
it to take a few cycles for the store to reach the XICS and the
interrupt request signal to come back to the core, particularly
with improvements to the load/store unit and dcache.

This therefore adds a delay between storing to the XICS and
checking for the occurrence of an interrupt, so as to give the
signals time to propagate.  The delay loop does an arbitrary 10
iterations, and each iteration does two loads and one store to
(cacheable) memory.
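
A sketch of such a delay loop (illustrative; the variable names are assumptions, not taken from the test):

#include <stdint.h>

static volatile uint64_t dummy_a, dummy_b;

/* Each iteration does two loads and one store to (cacheable) memory,
 * giving the store to the XICS and the returning interrupt signal time
 * to propagate. The volatile accesses keep the loop from being
 * optimised away. */
static void settle_delay(void)
{
    for (int i = 0; i < 10; i++)
        dummy_a = dummy_a + dummy_b;
}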

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 5a00029519 register_file: Report value being written before asserting it's not X
If a bug causes an indeterminate value to be written to a GPR, an
assert causes simulation to abort.  Move the assert after the report
of the GPR index and value so that we get to know what the bad value
is before the simulation terminates.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 12a257f01e
Merge pull request #205 from ozbenh/timing
Timing improvements
5 years ago
Paul Mackerras bf6cc2a05a
Merge pull request #204 from ozbenh/spi
Add an SPI master flash controller
5 years ago
Benjamin Herrenschmidt 176ae5c306 syscon: Remove combinational loop on ack and stall
Those hurt timing. Instead, latch the wishbone response for one cycle.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 6c3a8bf417 bram: Remove combinational loop on stall
It hurts timing and is pointless.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt e5aa0e9dc9 uart: Remove combinational loops on ack and stall signal
They hurt timing by forcing signals to come from the master and back
again in one cycle. Stall isn't sampled by the master unless there
is an active cycle, so masking it with cyc is pointless. Masking acks
is somewhat pointless too, as we don't handle early dropping of cyc
properly in any of our slaves anyway.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 6aadad5a75 spi: Add booting from flash to litedram init
It will look for an ELF binary at the flash offset specified
for the board (currently 0x300000 on Arty but that could be
changed).

Note: litedram is regenerated in order to rebuild the init code,
which was done using a newer version of litedram from LiteX.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt a89e1469ef spi: Add simulation support
This requires the s25fl128s.vhd flash model and FMF libraries,
which will be built when passed to the Makefile via the
FLASH_MODEL_PATH argument. Otherwise, a dummy module is used
which ties MISO to '1'.

The model isn't included as I'm not sure about its licence (GPL)
at this point, but it can be obtained from

https://github.com/ozbenh/microspi

FLASH_MODEL_PATH=<path to microspi>/model

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Dan Horák 801bd3b8ee
flash-arty: update error message (#203)
Signed-off-by: Dan Horák <dan@danny.cz>
5 years ago
Benjamin Herrenschmidt 9b458a9aa6
dmi: Add ASYNC_REG attribute on synchronizers (#200)
This tells Vivado to keep them close together, among other things.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt d266c9e67d
icache: Latch PLRU victim output (#199)
This stores the output of the PLRU big mux and clears the
tags and valid bits on the next cycle.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago