Commit Graph

60 Commits (8dc24416aadac93785fba6262fc555c5b283ab4e)

Author SHA1 Message Date
Benjamin Herrenschmidt 76f61ef823 dcache: Update PLRU on misses as well as hits
The current dcache does not update the PLRU on a cache miss that is later
satisfied during the reload process, so subsequent misses can end up
evicting the same cache line. The same issue affects dcbz, which is
treated more or less as a load miss.

This fixes it by triggering a PLRU update when r1.choose_victim is set;
that signal is asserted for one cycle on a miss in order to snapshot the
PLRU output. This means we update the PLRU on the same cycle as we capture
its output, which is fine (the new value becomes visible on the next cycle).

That way, a "miss" will result in a PLRU update to reflect that the entry
being refilled is actually used (and will be used to serve subsequent
load operations from the same cache line while being refilled).
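
A sketch of the condition this implies (signal names follow the text
above; the surrounding code is illustrative, not the actual source):

    -- Update the PLRU on a hit as before, and now also on the one
    -- cycle where we snapshot its output to choose a victim for a miss.
    do_plru_update <= r1.cache_hit or r1.choose_victim;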

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2 years ago
Benjamin Herrenschmidt 3edbbf5f18 Fix dcache_tb (and add dump of victim way to dcache)
It had bitrotted... more signals needed to be initialized. This also adds
a lot more accesses with different timing conditions, allowing us to
test a hit during reload, a hit with reload forwarding, a hit on an idle
cache, etc.

It also exposes a bug where the cache miss caused by the read of 0x140
uses the same victim way as the previous cache miss of 0x40 (same index).

This bug will need to be fixed separately, but at least this exposes it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2 years ago
Paul Mackerras a1f5867919 dcache: Split PLRU into storage and logic
Rather than having update and decode logic for each individual PLRU
as well as a register to store the current PLRU state, we now put the
PLRU state in a little RAM, which will typically use LUT RAM on FPGAs,
and have just a single copy of the logic to calculate the pseudo-LRU
way and to update the PLRU state.

The PLRU RAM that applies to the data storage (as opposed to the TLB)
is read asynchronously in the cycle after the cache tag matching is
done.  At the end of that cycle the PLRU RAM entry is updated if the
access was a cache hit, or a victim way is calculated and stored if
the access was a cache miss.  It is possible that a cache miss doesn't
start being handled until later, in which case the stored victim way
is used later when the miss gets handled.

Similarly for the TLB PLRU, the RAM is read asynchronously in the
cycle after a TLB lookup is done, and either updated at the end of
that cycle (for a hit), or a victim is chosen and stored for when the
TLB miss is satisfied.
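
Conceptually, per-set state registers plus per-set logic collapse into
one RAM plus one shared logic instance, roughly (a sketch; the names,
types and helper functions here are illustrative):

    -- PLRU state for every cache line, typically LUT RAM on an FPGA
    type plru_ram_t is array(0 to NUM_LINES - 1) of plru_state_t;
    signal plru_ram : plru_ram_t;

    -- Asynchronous read in the cycle after tag matching
    plru_cur   <= plru_ram(set_index);
    victim_way <= plru_decode(plru_cur);    -- hypothetical decode helper

    process(clk)
    begin
        if rising_edge(clk) then
            if plru_upd = '1' then          -- hit, or victim choice on miss
                plru_ram(set_index) <= plru_update(plru_cur, access_way);
            end if;
        end if;
    end process;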

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras cd2e174113 dcache: Fix compilation with NUM_WAYS and/or TLB_NUM_WAYS = 1
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras 6fe9dc9640 dcache: Reduce metavalue warnings
Among other changes, this converts signals that were previously
declared with an integer base type to unsigned. Since unsigned can
carry metavalues, we can move the metavalue checking closer to the
point of use and restrict it to the situations where the signal
really ought to be well defined. We now have a couple more signals
that indicate request validity to help with that.

Non-fatal asserts have been sprinkled throughout to assist with
determining the cause of warnings from library functions (primarily
NUMERIC_STD.TO_INTEGER and NUMERIC_STD."=").
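
The checking pattern is roughly as follows (illustrative, not the
exact code):

    -- unsigned, unlike integer, can carry 'U'/'X' metavalues, so the
    -- check can sit next to the use and fire only when the request is
    -- supposed to be valid.
    if req_valid = '1' then
        assert not is_x(std_ulogic_vector(req_index))
            report "metavalue in req_index"
            severity warning;               -- non-fatal
    end if;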

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras 795b6e2a6b Remove leftover logic for 16-byte loads and stores
This removes some logic that was previously added for the 16-byte
loads and stores (lq, lqarx, stq, stqcx.) and not completely removed
in commit c9e838b656 ("Remove support for lq, stq, lqarx and
stqcx.", 2022-06-04).

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras bdd4d04162 Simplify flow control in the dcache and loadstore units
Simplify the flow control by stalling the whole upstream pipeline when
a stage can't proceed, instead of trying to let each stage progress
independently when it can.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Anton Blanchard 39220be311 dcache: remove unused do_write signal
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
3 years ago
Benjamin Herrenschmidt 5cfa65e836 Introduce addr_to_wb() and wb_to_addr() helpers
These convert addresses to/from wishbone addresses, and use them
in parts of the caches, in order to make the code a bit more readable.

Along the way, this renames some functions in the caches to make it
clearer what they operate on, and fixes a bug in the icache STOP_RELOAD
state where the wishbone address wasn't properly converted.
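
The helpers amount to dropping or restoring the byte-offset bits (a
sketch, using the real_addr_t type from the commit below; WB_ADDR_SHIFT
would be 3 for the 64-bit, doubleword-addressed bus):

    function addr_to_wb(addr : real_addr_t) return wishbone_addr_type is
    begin
        -- discard the bits that select a byte within a bus word
        return addr(addr'left downto WB_ADDR_SHIFT);
    end;

    function wb_to_addr(wb_addr : wishbone_addr_type) return real_addr_t is
        variable a : real_addr_t := (others => '0');
    begin
        -- restore the byte-offset bits as zeroes
        a(a'left downto WB_ADDR_SHIFT) := wb_addr;
        return a;
    end;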

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
3 years ago
Benjamin Herrenschmidt d745995207 Introduce real_addr_t and addr_to_real()
This moves REAL_ADDR_BITS out of the caches and defines a real_addr_t
type for a real address, along with an addr_to_real() conversion helper.

This makes the VHDL a bit more readable.
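
Roughly (a sketch of the shape of the change):

    -- REAL_ADDR_BITS now lives in a common package instead of in each cache
    subtype real_addr_t is std_ulogic_vector(REAL_ADDR_BITS - 1 downto 0);

    function addr_to_real(addr : std_ulogic_vector(63 downto 0)) return real_addr_t is
    begin
        -- a real address is just the low REAL_ADDR_BITS of the full address
        return addr(REAL_ADDR_BITS - 1 downto 0);
    end;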

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
3 years ago
Paul Mackerras 70270c066a dcache: Fix bug with dcbz closely following stores with the same tag
This fixes a bug where a dcbz can get incorrectly handled as an
ordinary 8-byte store if it arrives while the dcache state machine is
handling other stores with the same tag value (i.e. within the same
set-sized area of memory).  The logic that says whether to include a
new store in the current wishbone cycle didn't take into account
whether the new store was a dcbz.  This adds a "req.dcbz = '0'" factor
so that it does.  This is necessary because dcbz is handled more like
a cache line refill (but writing to memory rather than reading) than
an ordinary store.
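
The guard becomes something like this (a sketch; the exact expression
in the source differs):

    -- A new store may join the wishbone cycle in progress only if it
    -- is an ordinary store with the same tag; req.dcbz = '0' is the
    -- new factor.
    can_combine := req.valid = '1' and req.op = OP_STORE_HIT
                   and req.same_tag = '1' and req.dcbz = '0';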

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras ca4eb46aea Make wishbone addresses be in units of doublewords or words
This makes the 64-bit wishbone buses have the address expressed in
units of doublewords (64 bits), and similarly for the 32-bit buses the
address is in units of words (32 bits).  This is to comply with the
wishbone spec.  Previously the addresses on the wishbone buses were in
units of bytes regardless of the bus data width, which is not correct
and caused problems with interfacing with externally-generated logic.
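
As a worked example, on a 64-bit bus the byte address 0x1008 is now
presented as wishbone address 0x201 (0x1008 / 8), and on a 32-bit bus
it would be 0x402 (0x1008 / 4).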

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Anton Blanchard b29c58f3d1 dcache: Loads from non-cacheable PTEs load entire 64 bits
A non-cacheable load should only load the data requested and no more. We
do the right thing for real-mode cache-inhibited storage instructions,
but when loading through a non-cacheable PTE we were loading the entire
64 bits regardless of the size.

Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
3 years ago
Paul Mackerras 0b23a5e760 dcache: Simplify data input to improve timing
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras 1a9834c506 dcache: Fix bug with forwarding of stores
We have two stages of forwarding to cover the two cycles of latency
between when something is written to BRAM and when that new data can
be read from BRAM.  When the writes to BRAM result from store
instructions, the write may write only some bytes of a row (8 bytes)
and not others, so we have a mask to enable only the written bytes to
be forwarded.  However, we only forward written data from either the
first stage of forwarding or the second, not both.  So if we have
two stores in succession that write different bytes of the same row,
and then a load from the row, we will only forward the data from the
second store, and miss the data from the first store; thus the load
will get the wrong value.

To fix this, we make the decision on which forward stage to use for
each byte individually.  This results in a 4-input multiplexer feeding
r1.data_out, with its inputs being the BRAM, the wishbone, the current
write data, and the 2nd-stage forwarding register.  Each byte of the
multiplexer is separately controlled.  The code for this multiplexer
is moved to the dcache_fast_hit process since it is used for cache
hits as well as cache misses.

This also simplifies the BRAM code by ensuring that we can use the
same source for the BRAM address and way selection for writes, whether
we are writing store data or cache line refill data from memory.
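
Per byte, the selection looks roughly like this (a sketch; names are
illustrative):

    -- Each byte lane independently picks the most recent writer of
    -- that byte, falling back to the BRAM contents.
    for i in 0 to ROW_SIZE - 1 loop
        if forward_sel1(i) = '1' then
            data_out(i*8 + 7 downto i*8) := store_data(i*8 + 7 downto i*8);
        elsif forward_sel2(i) = '1' then
            data_out(i*8 + 7 downto i*8) := forward_reg2(i*8 + 7 downto i*8);
        elsif use_wishbone = '1' then
            data_out(i*8 + 7 downto i*8) := wishbone_data(i*8 + 7 downto i*8);
        else
            data_out(i*8 + 7 downto i*8) := bram_data(i*8 + 7 downto i*8);
        end if;
    end loop;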

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras f812832ad7 dcache: Move way selection and forwarding earlier
This moves the way multiplexer for the data from the BRAM, and the
multiplexers for forwarding data from earlier stores or refills,
before a clock edge rather than after, so that now the data output
from the dcache comes from a clean latch.  To do this we remove the
extra latch on the output of the data BRAM (i.e. ADD_BUF=false) and
rearrange the logic.  The choice of whether to forward or not now
depends not on way comparisons but rather on tag comparisons, for the
sake of timing.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras 65c43b488b PMU: Add several more events
This implements most of the architected PMU events.  The ones missing
are mostly the ones that depend on which level of the cache hierarchy
data is fetched from.  The events implemented here, and their raw
event codes, are:

    Floating-point operation completed (100f4)
    Load completed (100fc)
    Store completed (200f0)
    Icache miss (200fc)
    ITLB miss (100f6)
    ITLB miss resolved (400fc)
    Dcache load miss (400f0)
    Dcache load miss resolved (300f8)
    Dcache store miss (300f0)
    DTLB miss (300fc)
    DTLB miss resolved (200f6)
    No instruction available and none being executed (100f8)
    Instruction dispatched (200f2, 300f2, 400f2)
    Taken branch instruction completed (200fa)
    Branch mispredicted (400f6)
    External interrupt taken (200f8)

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras 4c11c9c661 dcache: Simplify logic in RELOAD_WAIT_ACK state
Since the expression is_last_row(r1.store_row, r1.end_row_ix) can only
be true when stbs_done is true, there is no need to include stbs_done
in the expression for the reload being completed, and hence no need to
compute stbs_done in the RELOAD_WAIT_ACK state.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras eb7eba2d92 dcache: Snoop writes to memory by other agents
This adds a path where the wishbone that goes out to memory and I/O
also gets fed back to the dcache, which looks for writes that it
didn't initiate, and invalidates any cache line that gets written to.

This involves a second read port on the cache tag RAM for looking up
the snooped writes, and effectively a second write port on the cache
valid bit array to clear bits corresponding to snoop hits.
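
The invalidation itself is small (a sketch; names are illustrative):

    -- A second tag-RAM read port looks up snooped wishbone writes;
    -- on a snoop hit, a second write port on the valid bits clears
    -- the line.
    if snoop_valid = '1' and snoop_hit = '1' then
        cache_valids(snoop_index)(snoop_way) <= '0';
    end if;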

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras f636bb7c39 dcache: Fix bugs in pipelined operation
This fixes two bugs which show up when multiple operations are in
flight in the dcache, and adds a 'hold' input which will be needed
when loadstore1 is pipelined.

The first bug is that dcache needs to sample the data for a store on
the cycle after the store request comes in even if the store request
is held up because of a previous request (e.g. if the previous request
is a load miss or a dcbz).

The second bug is that a load request coming in for a cache line being
refilled needs to be handled immediately in the case where it is for
the row whose data arrives on the same cycle.  If it is not, then it
will be handled as a separate cache miss and the cache line will be
refilled again into a different way, leading to two ways both being
valid for the same tag.  This can lead to data corruption, in the
scenario where subsequent writes go to one of the ways and then that
way gets displaced but the other way doesn't.  This bug could in
principle show up even without having multiple operations in flight in
the dcache.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras 6427cab46f loadstore1/dcache: Send store data one cycle later
This makes timing easier and also means that store floating-point
single precision instructions no longer need to take an extra cycle.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras 4b2c23703c core: Implement quadword loads and stores
This implements the lq, stq, lqarx and stqcx. instructions.

These instructions all access two consecutive GPRs; for example the
"lq %r6,0(%r3)" instruction will load the doubleword at the address
in R3 into R7 and the doubleword at address R3 + 8 into R6.  To cope
with having two GPR sources or destinations, the instruction gets
repeated at the decode2 stage, that is, for each lq/stq/lqarx/stqcx.
coming in from decode1, two instructions get sent out to execute1.

For these instructions, the RS or RT register gets modified on one
of the iterations by setting the LSB of the register number.  In LE
mode, the first iteration uses RS|1 or RT|1 and the second iteration
uses RS or RT.  In BE mode, this is done the other way around.  In
order for decode2 to know what endianness is currently in use, we
pass the big_endian flag down from icache through decode1 to decode2.
This is always in sync with what execute1 is using because only rfid
or an interrupt can change MSR[LE], and those operations all cause
a flush and redirect.

There is now an extra column in the decode tables in decode1 to
indicate whether the instruction needs to be repeated.  Decode1 also
enforces the rule that lq with RA = RT and lqarx with RA = RT or
RB = RT are illegal.

Decode2 now passes a 'repeat' flag and a 'second' flag to execute1,
and execute1 passes them on to loadstore1.  The 'repeat' flag is set
for both iterations of a repeated instruction, and 'second' is set
on the second iteration.  Execute1 does not take asynchronous or
trace interrupts on the second iteration of a repeated instruction.

Loadstore1 uses 'next_addr' for the second iteration of a repeated
load/store so that we access the second doubleword of the memory
operand.  Thus loadstore1 accesses the doublewords in increasing
memory order.  For 16-byte loads this means that the first iteration
writes GPR RT|1.  It is possible that RA = RT|1 (this is a legal
but non-preferred form), meaning that if the memory operand was
misaligned, the first iteration would overwrite RA but then the
second iteration might take a page fault, leading to corrupted state.
To avoid that possibility, 16-byte loads in LE mode take an
alignment interrupt if the operand is not 16-byte aligned.  (This
is the case anyway for lqarx, and we enforce it for lq as well.)
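
The register-number adjustment in decode2 then reduces to (a sketch
with illustrative names):

    -- RT|1 (or RS|1) is used on the first iteration in LE mode and on
    -- the second iteration in BE mode, RT (even) on the other one, so
    -- that memory is always accessed in increasing address order.
    if second = big_endian then
        gpr_num := rt_field or "00001";
    else
        gpr_num := rt_field;
    end if;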

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras 784d409999 dcache: Add more commentary, no code change
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras 128fe8ac26 dcache: Ease timing on wishbone data and byte selects
This eliminates a path where the inputs to r1.wb.dat and r1.wb.sel
depend on req_op, which depends on the TLB and cache hit detection.
In fact they only need to depend on the nature of the request in
r0.req (i.e. DCBZ, store, cacheable load, or non-cacheable load).
This sets them at the beginning of the code for IDLE state rather
than inside the req_op case statement.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras c180ed0af0 dcache: Output separate done-without-error and error-done signals
This reduces the complexity of the logic in the places where these
signals are used.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras 56420e74f3 dcache: Ease timing on calculation of acks remaining
This moves the incrementing or decrementing of r1.acks_pending
to the cycle after a strobe is output or an ack is seen on the
wishbone, and simplifies the logic that determines whether the
cycle is now complete.  This means that the path from seeing
req_op equal to OP_STORE_HIT or OP_STORE_MISS to setting r1.state
and r1.cyc now just involves the stbs_done bit rather than a more
complex calculation involving the possibly incremented r1.acks_pending.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras dc8980d5a5 dcache: Improve timing of valid/done outputs
This makes d_out.valid and m_out.done come directly from registers in
order to improve timing.  The inputs to the registers are set by the
same conditions that cause r1.hit_load_valid, r1.slow_valid,
r1.error_done and r1.stcx_fail to be set.

Note that the STORE_WAIT_ACK state doesn't test r1.mmu_req but assumes
that the request came from loadstore1.  This is because we normally
have r1.full = 0 in this state, which means that r1.mmu_req can
change at any time.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras 893d2bc6a2 core: Don't generate logic for log data when LOG_LENGTH = 0
This adds "if LOG_LENGTH > 0 generate" to the places in the core
where log output data is latched, so that when LOG_LENGTH = 0 we
don't create the logic to collect the data which won't be stored.
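
The pattern is a conditional generate around the latching process (a
sketch; the signal names are placeholders):

    maybe_log: if LOG_LENGTH > 0 generate
        signal log_data : std_ulogic_vector(LOG_WIDTH - 1 downto 0);
    begin
        -- this register, and anything feeding it, is only elaborated
        -- when there is actually a log to write to
        process(clk)
        begin
            if rising_edge(clk) then
                log_data <= signals_of_interest;
            end if;
        end process;
        log_out <= log_data;
    end generate;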

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 1be6fbac33 dcache: Remove dependency of r1.wb.adr/dat/sel on req_op
This improves timing by setting r1.wb.{adr,dat,sel} to the next
request when doing a write cycle on the wishbone before we know
whether the next request has a TLB and cache hit or not, i.e.
without depending on req_op.  r1.wb.stb still depends on req_op.

This contains a workaround for what is probably a bug elsewhere,
in that changing r1.wb.sel unconditionally once we see stall=0
from the wishbone causes incorrect behaviour.  Making it
conditional on there being a valid following request appears
to fix the problem.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras c01e1c7b91 dcache: Update TLB PLRU one cycle later
This puts the inputs to the TLB PLRU through a register stage, so
the TLB PLRU update is done in the cycle after the TLB tag
matching rather than the same cycle.  This improves timing.
The PLRU output is only used when writing the TLB in response to
a tlbwe request from the MMU, and that doesn't happen within one
cycle of a virtual-mode load or store, so the fact that the
TLB victim way information is delayed by one cycle doesn't
create any problems.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 31587affb3 dcache: Do PLRU update one cycle later
This does the PLRU update based on r1.cache_hit and r1.hit_way rather
than req_op and req_hit_way, which means there is now a register
between the TLB and cache tag lookup and the PLRU update, which should
help with timing.

The PLRU victim selection now becomes valid one cycle later, in the
cycle where r1.write_tag = 1.  We now have replace_way coming from
the PLRU when r1.write_tag = 1 and from r1.store_way at other times,
and we use that instead of r1.store_way in situations where we need
it to be valid in the first cycle of the RELOAD_WAIT_ACK state.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras a4500c63a2 dcache: Reduce back-to-back store latency from 3 cycles to 2
This uses the machinery we already had for comparing the real address
of a new request with the tag of a previous request (r1.reload_tag)
to get better timing on comparing the address of a second store with
the one in progress.  The comparison is now on the set size rather
than the page size, but since set size can't be larger than the page
size (and usually will equal the page size), that is OK.

The same comparison can also be used to tell when we can satisfy
a load miss during a cache line refill.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras b595963233 dcache: Reduce latencies and improve timing
This implements various improvements to the dcache with the aim of
making it go faster.

- We can now execute operations that don't need to access main memory
  (cacheable loads that hit in the cache and TLB operations) as soon
  as any previous operation has completed, without waiting for the
  state machine to become idle.

- Cache line refills start with the doubleword that is needed to
  satisfy the load that initiated them.

- Cacheable loads that miss return their data and complete as soon as
  the requested doubleword comes back from memory; they don't wait for
  the refill to finish.

- We now have per-doubleword valid bits for the cache line being
  refilled, meaning that if a load comes in for a line that is in the
  process of being refilled, we can return the data and complete it
  within a couple of cycles of the doubleword coming in from memory
  (see the sketch after this list).

- There is now a bypass path for data being written to the cache RAM
  so that we can do a store hit followed immediately by a load hit to
  the same doubleword.  This also makes the data from a refill
  available to load hits one cycle earlier than it would be otherwise.

- Stores complete in the cycle where their wishbone operation is
  initiated, without waiting for the wishbone cycle to complete.

- During the wishbone cycle for a store, if another store comes in
  that is to the same page, and we don't have a stall from the
  wishbone, we can send out the write for the second store in the same
  wishbone cycle and without going through the IDLE state first.  We
  limit it to 7 outstanding writes that have not yet been
  acknowledged.

- The cache tag RAM is now read on a clock edge rather than being
  combinatorial for reading.  Its width is rounded up to a multiple of
  8 bits per way so that byte enables can be used for writing
  individual tags.

- The cache tag RAM is now written a cycle later than previously, in
  order to ease timing.

- Data for a store hit is now written one cycle later than
  previously.  This eases timing since we don't have to get through
  the tag matching and on to the write enable within a single cycle.
  The 2-stage bypass path means we can still handle a load hit on
  either of the two cycles after the store and return the correct
  data.  (A load hit 3 or more cycles later will get the correct data
  from the BRAM.)

- Operations can sit in r0 while there is an uncompleted operation in
  r1.  Once the operation in r1 is completed, the operation in r0
  spends one cycle in r0 for TLB/cache tag lookup and then gets put
  into r1.req.  This can happen before r1 gets to the IDLE state.
  Some operations can then be completed before r1 gets to the IDLE
  state - a load miss to the cache line being refilled, or a store to
  the same page as a previous store.
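
The per-doubleword valid bits mentioned above reduce to something like
this (a sketch; names are illustrative):

    -- one valid bit per doubleword (row) of the line being refilled
    signal rows_valid : std_ulogic_vector(ROW_PER_LINE - 1 downto 0);

    -- a load to the line under refill can complete as soon as its
    -- particular doubleword has arrived from memory
    hit_during_reload <= reloading and tag_match and
                         rows_valid(req_row_in_line);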

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 49a4d9f67a Add core logging
This logs 256 bits of data per cycle to a ring buffer in BRAM.  The
data collected can be read out through 2 new SPRs or through the
debug interface.

The new SPRs are LOG_ADDR (724) and LOG_DATA (725).  LOG_ADDR contains
the buffer write pointer in the upper 32 bits (in units of entries,
i.e. 32 bytes) and the read pointer in the lower 32 bits (in units of
doublewords, i.e. 8 bytes).  Reading LOG_DATA gives the doubleword
from the buffer at the read pointer and increments the read pointer.
Setting bit 31 of LOG_ADDR inhibits the trace log system from writing
to the log buffer, so the contents are stable and can be read.

There are two new debug addresses which function similarly to the
LOG_ADDR and LOG_DATA SPRs.  The log is frozen while either or both of
the LOG_ADDR SPR bit 31 or the debug LOG_ADDR register bit 31 are set.

The buffer defaults to 2048 entries, i.e. 64kB.  The size is set by
the LOG_LENGTH generic on the core_debug module.  Software can
determine the length of the buffer because the length is ORed into the
buffer write pointer in the upper 32 bits of LOG_ADDR.  Hence the
length of the buffer can be calculated as 1 << (31 - clz(LOG_ADDR)).

There is a program to format the log entries in a somewhat readable
fashion in scripts/fmt_log/fmt_log.c.  The log_entry struct in that
file describes the layout of the bits in the log entries.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Benjamin Herrenschmidt ecaa5e2fb2 dcache: Rework RAM wrapper to synthesize better on Xilinx
The global wr_en signal is causing Vivado to generate two TDP (True Dual Port)
block RAMs instead of one SDP (Simple Dual Port) for each cache way. Remove
it and instead apply an AND to the individual byte write enables.
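
In the RAM process that becomes (a sketch):

    process(clk)
    begin
        if rising_edge(clk) then
            for i in 0 to ROW_SIZE - 1 loop
                -- per-byte enable qualified by the former global wr_en,
                -- letting Vivado infer one SDP BRAM per way
                if (wr_sel(i) and wr_en) = '1' then
                    ram(to_integer(unsigned(wr_addr)))(i*8 + 7 downto i*8)
                        <= wr_data(i*8 + 7 downto i*8);
                end if;
            end loop;
        end if;
    end process;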

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Paul Mackerras eca0fb5bf1 dcache: Fix bug in store hit after dcbz case
This fixes a bug where a store that hits in the dcache immediately
following a dcbz has its write to the cache RAM suppressed (but not
its write to memory).  If a load to the same location comes along
before the cache line gets replaced, the load will return incorrect
data.

Fixes: 4db1676ef8 ("dcache: Don't assert on dcbz cache hit")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras a658766fcf Implement slbia as a dTLB/iTLB flush
Slbia (with IH=7) is used in the Linux kernel to flush the ERATs
(our iTLB/dTLB), so make it do that.

This moves the logic to work out whether to flush a single entry
or the whole TLB from dcache and icache into mmu.  We now invalidate
all dTLB and iTLB entries when the AP (actual pagesize) field of
RB is non-zero on a tlbie[l], as well as when IS is non-zero.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras dee3783d79 MMU: Remove software-loaded dTLB mode
This removes the hack where the tlbie instruction could be used to
load entries directly into the dTLB, because we don't report the
correct DSISR values for accesses that hit software-loaded dTLB
entries and have privilege or permission errors.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 3eb07dc637 MMU: Refetch PTE on access fault
This is required by the architecture.  It means that the error bits
reported in DSISR or SRR1 now come from the permission/RC check done
on the refetched PTE rather than the TLB entry.  Unfortunately that
somewhat breaks the software-loaded TLB mode of operation in that
DSISR/SRR1 always report no PTE rather than permission error or
RC failure.

This also restructures the loadstore1 state machine a bit, combining
the FIRST_ACK_WAIT and LAST_ACK_WAIT states into a single state and
the MMU_LOOKUP_1ST and MMU_LOOKUP_LAST states likewise.  We now have a
'dwords_done' bit to say whether the first transfer of two (for an
unaligned access) has been done.

The cache paradox error (where a non-cacheable access finds a hit in
the cache) is now the only cause of DSI from the dcache.  This should
probably be a machine check rather than DSI in fact.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 4e6fc6811a MMU: Implement radix page table machinery
This adds the necessary machinery to the MMU for it to do radix page
table walks.  The core elements are a shifter that can shift the
address right by between 0 and 47 bits, a mask generator that can
generate a mask of between 5 and 16 bits, a final mask generator,
and new states in the state machine.

(The final mask generator is used for transferring bits of the
original address into the resulting TLB entry when the leaf PTE
corresponds to a page size larger than 4kB.)

The hardware does not implement a partition table or a process table.
Software is expected to load the appropriate process table entry
into a new SPR called PGTBL0, SPR 720.  The contents should be
formatted as described in Book III section 5.7.6.2 of the Power ISA
v3.0B.  PGTBL0 is set to 0 on hard reset.  At present, the top two bits
of the address (the quadrant) are ignored.

There is currently no caching of any step in the translation process
or of the final result, other than the entry created in the dTLB.
That entry is a 4k page entry even if the leaf PTE found in the walk
corresponds to a larger page size.

This implementation can handle almost any page table layout and any
page size.  The RTS field (in PGTBL0) can have any value between 0
and 31, corresponding to a total address space size between 2^31
and 2^62 bytes.  The RPDS field of PGTBL0 can be any value between
5 and 16, except that a value of 0 is taken to disable radix page
table walking (for use when one is using software loading of TLB
entries).  The NLS field of the page directory entries can have any
value between 5 and 16.  The minimum page size is 4kB, meaning that
the sum of RPDS and the NLS values of the PDEs found on the path to
a leaf PTE must be less than or equal to RTS + 31 - 12.
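
As a worked example, the radix layout Linux typically uses fits this
bound exactly: a 2^52-byte address space (RTS = 21) with RPDS = 13,
three directory levels of NLS = 9, and 4kB leaf pages gives
13 + 9 + 9 + 9 = 40 = RTS + 31 - 12.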

The PGTBL0 SPR is in the mmu module; thus this adds a path for
loadstore1 to read and write SPRs in mmu.  This adds code in dcache
to service doubleword read requests from the MMU, as well as requests
to write dTLB entries.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 8160f4f821 Add framework for implementing an MMU
This adds a new module to implement an MMU.  At the moment it doesn't
do very much.  Tlbie instructions now get sent by loadstore1 to mmu,
which sends them to dcache, rather than loadstore1 sending them
directly to dcache.  TLB misses from dcache now get sent by loadstore1
to mmu, which currently just returns an error.  Loadstore1 then
generates a DSI in response to the error return from mmu.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras d47fbf88d1 Implement access permission checks
This adds logic to the dcache to check the permissions encoded in
the PTE that it gets from the dTLB.  The bits that are checked are:

R must be 1
C must be 1 for a store
EAA(0) - if this is 1, MSR[PR] must be 0
EAA(2) must be 1 for a store
EAA(1) | EAA(2) must be 1 for a load

In addition, ATT(0) is used to indicate a cache-inhibited access.

This now implements DSISR bits 36, 38 and 45.

(Bit numbers above correspond to the ISA, i.e. using big-endian
numbering.)

MSR[PR] is now conveyed to loadstore1 for use in permission checking.
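
The checks reduce to a few gates (a sketch with illustrative names;
eaa() is indexed with the ISA's big-endian bit numbers used above):

    -- R must be set, and C as well for a store
    rc_ok   <= pte_r and (pte_c or not is_store);
    -- EAA(0) marks a privileged page: no access when MSR[PR] = 1;
    -- EAA(2) grants write permission, EAA(1) or EAA(2) grants read
    perm_ok <= (not eaa(0) or not msr_pr) and
               (eaa(2) or not is_store) and
               (eaa(1) or eaa(2) or is_store);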

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 42d0fcc511 Implement data storage interrupts
This adds a path from loadstore1 back to execute1 for reporting
errors, and machinery in execute1 for generating data storage
interrupts at vector 0x300.

If dcache is given two requests in successive cycles and the
first encounters an error (e.g. a TLB miss), it will now cancel
the second request.

Loadstore1 now responds to errors reported by dcache by sending
an exception signal to execute1 and returning to the idle state.
Execute1 then writes SRR0 and SRR1 and jumps to the 0x300 Data
Storage Interrupt vector.  DAR and DSISR are held in loadstore1.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 750b3a8e28 dcache: Implement data TLB
This adds a TLB to dcache, providing the ability to translate
addresses for loads and stores.  No protection mechanism has been
implemented yet.  The MSR_DR bit controls whether addresses are
translated through the TLB.

The TLB is a fixed-pagesize, set-associative cache.  Currently
the page size is 4kB and the TLB is 2-way set associative with 64
entries per set.

This implements the tlbie instruction.  RB bits 10 and 11 control
whether the whole TLB is invalidated (if either bit is 1) or just
a single entry corresponding to the effective page number in bits
12-63 of RB.

As an extension until we get a hardware page table walk, a tlbie
instruction with RB bits 9-11 set to 001 will load an entry into
the TLB.  The TLB entry value is in RS in the format of a radix PTE.

Currently there is no proper handling of TLB misses.  The load or
store will not be performed but no interrupt is generated.

In order to make timing at 100MHz on the Arty A7-100, we compare
the real address from each way of the TLB with the tag from each way
of the cache in parallel (requiring # TLB ways * # cache ways
comparators).  Then the result is selected based on which way hit in
the TLB.  That avoids a timing path going through the TLB EA
comparators, the multiplexer that selects the RA, and the cache tag
comparators.

The hack where addresses of the form 0xc------- are marked as
cache-inhibited is kept for now but restricted to real-mode accesses.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 4db1676ef8 dcache: Don't assert on dcbz cache hit
We can hit the assert for req_op = OP_STORE_HIT while reloading in the
case of dcbz, since dcbz looks like a store.  Therefore we need to
exclude that case from the assert.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 041d6bef60 dcache: Implement the dcbz instruction
This adds logic to dcache and loadstore1 to implement dcbz.  For now
it zeroes a single cache line (by default 64 bytes), not 128 bytes
like IBM Power processors do.

The dcbz operation is performed much like a load miss, except that
we are writing zeroes to memory instead of reading.  As each ack
comes back, we write zeroes to the BRAM instead of data from memory.
In this way we zero the line in memory and also zero the line of
cache memory, establishing the line in the cache if it wasn't already
resident.  If it was already resident then we overwrite the existing
line in the cache.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras b349cc891a loadstore1: Move logic from dcache to loadstore1
So that the dcache could in future be used by an MMU, this moves
logic to do with data formatting, rA updates for update-form
instructions, and handling of unaligned loads and stores out of
dcache and into loadstore1.  For now, dcache connects only to
loadstore1, and loadstore1 now has the connection to writeback.

Dcache generates a stall signal to loadstore1 which indicates that
the request presented in the current cycle was not accepted and
should be presented again.  However, loadstore1 doesn't currently
use it because we know that we can never hit the circumstances
where it might be set.

For unaligned transfers, loadstore1 generates two requests to
dcache back-to-back, and then waits to see two acks back from
dcache (cycles where d_in.valid is true).

Loadstore1 now has a FSM for tracking how many acks we are
expecting from dcache and for doing the rA update cycles when
necessary.  Handling for reservations and conditional stores is
still in dcache.

Loadstore1 now generates its own stall signal back to decode2,
so we no longer need the logic in execute1 that generated the stall
for the first two cycles.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras ef9c1efd72 dcache: Remove LOAD_UPDATE2 state
Since we removed one cycle from the load hit case, we actually no
longer need the extra cycle provided by having the LOAD_UPDATE state.
Therefore this makes the load hit case in the IDLE and NEXT_DWORD
states go to LOAD_UPDATE2 rather than LOAD_UPDATE, then removes
LOAD_UPDATE and renames LOAD_UPDATE2 to LOAD_UPDATE.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 81d777be02 dcache: Trim one cycle from the load hit path
Currently we don't get the result from a load that hits in the dcache
until the fourth cycle after the instruction was presented to
loadstore1.  This trims this back to 3 cycles by taking the low order
bits of the address generated in loadstore1 into dcache directly (not
via the output register of loadstore1) and using them to address the
read port of the dcache data RAM.  We use the lower 12 address bits
here in the expectation that any reasonable data cache design will
have a set size of 4kB or less in order to avoid the aliasing problems
that can arise with a virtually-indexed physically-tagged cache if
the set size is greater than the smallest page size provided by the
MMU.

With this we can get rid of r2 and drive the signals going to
writeback from r1, since the load hit data is now available one
cycle earlier.  We need a multiplexer on the read address of the
data cache RAM in order to handle the second doubleword of an
unaligned access.

One small complication is that we now need an extra cycle in the case
of an unaligned load which misses in the data cache and which reads
the 2nd-last and last doublewords of a cache line.  This is the reason
for the PRE_NEXT_DWORD state; if we just go straight to NEXT_DWORD
then we end up having the write of the last doubleword of the cache
line and the read of that same doubleword occurring in the same
cycle, which means we read stale data rather than the just-fetched
data.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 5d85ede97d dcache: Implement load-reserve and store-conditional instructions
This involves plumbing the (existing) 'reserve' and 'rc' bits in
the decode tables down to dcache, and 'rc' and 'store_done' bits
from dcache to writeback.

It turns out that we had 'RC' set in the 'rc' column for several
ordinary stores and for the attn instruction.  This corrects them
to 'NONE', and sets the 'rc' column to 'ONE' for the conditional
stores.

In writeback we now have logic to set CR0 when the input from dcache
has rc = 1.

In dcache we have the reservation itself, which has a valid bit
and the address down to cache line granularity.  We don't currently
store the reservation length.  For a store conditional which fails,
we set a 'cancel_store' signal which inhibits the write to the
cache and prevents the state machine from starting a bus cycle or
going to the STORE_WAIT_ACK state.  Instead we set r1.stcx_fail
which causes the instruction to complete in the next cycle with
rc=1 and store_done=0.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago