Commit Graph

57 Commits (c1f23e7417b25867a3d077a9622ddf8369f3b462)

Author SHA1 Message Date
Paul Mackerras 73b6004ac6 icache: Use next real address to index icache
Now that we are translating the fetch effective address to the real
address one cycle earlier, we can use the real address to index the
icache array.
This has the benefit that the set size can be larger than a page,
enabling us to configure the icache to be larger without having to
increase its associativity.  Previously the set size was limited to
the page size to avoid aliasing problems.  Thus for example a 32kB
icache would need to be 8-way associative, resulting in large numbers
of LUTs being used for tag comparisons in FPGA implementations, and
poor timing.  With this change, a 32kB icache can be 1 or 2-way
associative, which means deeper and narrower tag and data RAMs and
fewer tag comparators.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
1 year ago
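As a quick illustration of the arithmetic above (a C sketch, assuming
the architecture's 4kB minimum page size; not code from the repo):

    #include <stdio.h>

    int main(void)
    {
        const unsigned page_size  = 4096;       /* smallest supported page size */
        const unsigned cache_size = 32 * 1024;  /* 32kB icache */

        /* With effective-address indexing, the set size (cache_size / ways)
         * must not exceed the page size, so the minimum associativity is
         * cache_size / page_size. */
        printf("EA-indexed 32kB icache needs %u ways\n",
               cache_size / page_size);         /* prints 8 */

        /* Real-address indexing removes the aliasing constraint, so 1 or
         * 2 ways suffice, giving deeper and narrower tag/data RAMs. */
        return 0;
    }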
Paul Mackerras f9e5622327 Move iTLB from icache to fetch1
This moves the address translation step for instruction fetches one
cycle earlier, so that it now happens in the fetch1 stage.  There is
now a 2-entry mini translation cache ("ERAT", or effective to real
address translation cache) which operates on the output of the
multiplexer that selects the instruction address for the next cycle.
The ERAT consists of two effective address registers and two
corresponding real address registers.  They store the page number part
of the addresses for a 4kB page size, which is the smallest page size
supported by the architecture.

If the effective address doesn't match either of the EA registers, and
address translation is enabled, then i_out.req goes low for two cycles
while the iTLB is looked up.  Experimentally, this delay results in a
0.1% drop in coremark performance; allowing two cycles for the lookup
results in better timing.  The result from the iTLB is placed into the
least recently used ERAT entry and then used to translate the address
as normal.  If address translation is not enabled then the EA is used
directly as the real address.

The iTLB structure is the same as it was before; direct mapped,
indexed using a hashed EA.

The "fetch failed" signal, which indicates a TLB miss or protection
violation, is now generated in fetch1 and passed through icache.
When it is asserted, fetch1 goes into a stalled state until a PTE
arrives from the MMU (which gets put into both the iTLB and the ERAT),
or an interrupt or redirect occurs.

Any TLB invalidations from the MMU invalidate the whole ERAT.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
1 year ago
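The ERAT described above can be pictured as two EA/RA register pairs
plus a single LRU bit. A loose C model (struct and function names are
invented for illustration; the real logic is in fetch1.vhdl):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12   /* 4kB pages, the smallest supported size */

    struct erat {
        uint64_t ea_page[2];    /* effective page numbers */
        uint64_t ra_page[2];    /* corresponding real page numbers */
        bool     valid[2];
        int      lru;           /* index of the least recently used entry */
    };

    /* On a miss (with translation enabled), i_out.req drops for two
     * cycles while the iTLB is looked up; the result then refills the
     * LRU entry via erat_refill(). */
    static bool erat_lookup(struct erat *e, uint64_t ea, uint64_t *ra)
    {
        uint64_t page = ea >> PAGE_SHIFT;
        for (int i = 0; i < 2; i++) {
            if (e->valid[i] && e->ea_page[i] == page) {
                *ra = (e->ra_page[i] << PAGE_SHIFT) |
                      (ea & ((1ull << PAGE_SHIFT) - 1));
                e->lru = 1 - i;     /* the other entry becomes LRU */
                return true;
            }
        }
        return false;
    }

    static void erat_refill(struct erat *e, uint64_t ea, uint64_t ra)
    {
        int i = e->lru;
        e->ea_page[i] = ea >> PAGE_SHIFT;
        e->ra_page[i] = ra >> PAGE_SHIFT;
        e->valid[i] = true;
        e->lru = 1 - i;
    }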
Paul Mackerras 963c225955 icache: Read icache tag RAM synchronously
This uses the next_nia provided to us by fetch1 to enable the icache
tag RAM to be read synchronously (using a clock edge), which should
enable block RAMs to be used on FPGAs rather than LUT RAM or
flip-flops.  We define a separate RAM per way to avoid any problems
with the tools trying to infer byte write enables for writing to a
single way.

Since next_nia can move on, we only get one shot at reading the
cache tag RAM entry for the current access.  If it is a miss, then the
state machine will read the cache line from RAM, and we can consider
the access to be a hit once the state machine has brought in the
doubleword we need.  The TLB hit/miss check has been modified to check
r.store_tag rather than the tag read from the tag RAM for this case.

However, it is also possible that stall_in will be asserted for the
whole time until the cache line refill is completed.  To handle this
case, we remember (in r.stalled_hit) that we detected a hit while
stalled, and use that hit once stall_in is deasserted.  This avoids
doing an unnecessary second reload of the same cache line.  The
r.stalled_hit flag gets cleared in CLR_TAG state since that is when
cache tags can be overwritten, meaning that a previously detected hit
might no longer be valid.

There is also the case where the tag read from the tag RAM is the one
we are looking for, and is the same index as the line that is starting
to be reloaded by the state machine.  If the icache gets stalled for
long enough that the line reload finishes, it would then be possible
for the access to be detected as a hit even though the cache line has
been overwritten.  To counter this, we detect the case where the cache
tag RAM entry being read is the same as the entry being written and
set a 'tag_overwrite' flag bit to indicate that one of the tags in
cache_tags_set is no longer valid.

For snooping writes to memory, we have a second read port on the cache
tag RAM.  These tags are also read synchronously, so the logic for
clearing cache line valid bits on a snoop has been adjusted (the tag
comparisons and valid bit clearing now happen in the same cycle).

This also simplifies the expression for 'insn' by removing a
dependency on r.hit_valid, fixes the instruction value sent to the
log, and deasserts stall_out when flush_in is true.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
1 year ago
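The stalled-hit bookkeeping above can be sketched roughly as follows (a
speculative C rendering with invented names; the real record fields are
r.stalled_hit and friends in icache.vhdl):

    #include <stdbool.h>

    struct stall_track {
        bool stalled_hit;       /* hit detected while stall_in was asserted */
        int  stalled_hit_way;
    };

    /* Remember a hit seen under stall and use it once stall_in drops,
     * avoiding an unnecessary second reload of the same line; forget it
     * in CLR_TAG state, since that is when cache tags can be
     * overwritten and a previously detected hit may no longer be valid. */
    static void track_stalled_hit(struct stall_track *s, bool stall_in,
                                  bool hit, int hit_way,
                                  bool in_clr_tag, bool tag_overwrite)
    {
        if (in_clr_tag)
            s->stalled_hit = false;
        else if (stall_in && hit && !tag_overwrite) {
            s->stalled_hit = true;
            s->stalled_hit_way = hit_way;
        }
    }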
Paul Mackerras 723008b8c2 icache: Read iTLB using early next NIA from fetch1
Using i_in.next_nia means that we can read the iTLB RAM arrays
synchronously rather than asynchronously, which gives more opportunity
for using block RAMs in FPGA implementations.

The reading is gated by the stall signals because the next_nia can
advance when stalled, but we need the iTLB entry for the instruction
that i_in.nia points to.  Because the read is gated while stalled, if
we are stalled because of an iTLB miss we don't see the new iTLB entry
when it is written.
Instead we save the new entry directly when it arrives and use it
instead of the values read from the iTLB RAM.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
1 year ago
Paul Mackerras e92d49375f fetch1: Reorganize fetch1 to provide an asynchronous early next NIA to icache
This adds a next_nia field to the Fetch1ToIcacheType record, which
provides an indication of what will be in the nia field on the next
non-stalled cycle.  This is intended to be as fast as possible, being
a selection from two redirect addresses (from writeback and decode1)
or an internal register (r_int.next_nia).  Reset addresses and
predicted branch targets come through this internal register.

The rearrangement here has the side effect that we can now use the BTC
on the first instruction after a taken branch, whereas previously the
BTC was only active starting with the second instruction after a taken
branch.  This provides a slight improvement in performance.

This also fixes a buglet in icache where it would assert its stall
output when i_in.req was false.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
1 year ago
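The early next-NIA selection reads like a simple priority mux; a
hypothetical C rendering (argument names approximate the commit's
description):

    #include <stdbool.h>
    #include <stdint.h>

    /* A redirect from writeback wins, then one from decode1, otherwise
     * the internal register (which carries reset addresses and
     * predicted branch targets). */
    static uint64_t next_nia(bool wb_redirect, uint64_t wb_addr,
                             bool d1_redirect, uint64_t d1_addr,
                             uint64_t r_int_next_nia)
    {
        if (wb_redirect)
            return wb_addr;
        if (d1_redirect)
            return d1_addr;
        return r_int_next_nia;
    }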
Paul Mackerras 06ff486567 icache: Restore primary opcode to instruction word
The icache stores a predecoded insn_code value for each instruction,
and, so as to fit in 36 bits, omits the primary opcode (the most
significant 6 bits) of each instruction.  Previously, for valid
instructions, the primary opcode field of the instruction delivered to
decode1 was a part-representation of the insn_code value rather than
the actual primary opcode.  This adds a lookup table to compute the
primary opcode from the insn_code and deliver it in the instruction
words supplied to decode1.

In order that each insn_code can be associated with a single primary
opcode value, the various no-operation instructions with primary
opcode 31 (the reserved no-ops and dss, dst and dstst) have been given
a new insn_code, INSN_rnop, leaving INSN_nop for the preferred no-op
(ori r0,r0,0).

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
1 year ago
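Conceptually the change is a table with one primary opcode per
insn_code, used to rebuild a faithful 32-bit instruction word; a C
sketch (table contents elided, helper name invented):

    #include <stdint.h>

    /* One 6-bit primary opcode per insn_code; populated from the
     * predecoder tables in the real design, elided here. */
    static const uint8_t primary_opcode[512] = { /* ... */ };

    /* Rebuild the instruction delivered to decode1 from the 26 low bits
     * stored in the icache plus the looked-up primary opcode. */
    static uint32_t rebuild_insn(unsigned insn_code, uint32_t low26)
    {
        return ((uint32_t)primary_opcode[insn_code] << 26) |
               (low26 & 0x03ffffffu);
    }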
Paul Mackerras b1b1367cd5 icache: Fix instruction sent to log
Log the instruction read from the icache, not the instruction (if any)
being written to the icache.

Fixes: 6db626d245 ("icache: Log 36 bits of instruction rather than 32")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
1 year ago
Paul Mackerras 4416ebe92e fetch1: Change the way predictions from the BTC are sent downstream
Instead of sending down the predicted taken/not-taken bits with the
target of the branch, we now send them down with the branch itself.
Previously icache adjusted for this by sending the prediction bits to
decode1 without a 1-clock delay while everything else had a 1-clock
delay.  Now icache keeps the prediction bits with the rest of the
attributes for the request.

Also fix a buglet in fetch1 where the first address sent out after
reset didn't have .req set.  Currently this doesn't cause a problem
because icache doesn't really look at .req.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras 86212dc879 icache: Split PLRU into storage and logic
Rather than having update and decode logic for each individual PLRU
as well as a register to store the current PLRU state, we now put the
PLRU state in a little RAM, which will typically use LUT RAM on FPGAs,
and have just a single copy of the logic to calculate the pseudo-LRU
way and to update the PLRU state.  This logic is in the plrufn module
and is just combinatorial logic.  A new module was created for this as
other parts of the system are still using plru.vhdl.

The PLRU RAM in the icache is read asynchronously in the cycle
after the cache tag matching is done.  At the end of that cycle the
PLRU RAM entry is updated if the access was a cache hit, or a victim
way is calculated and stored if the access was a cache miss and
miss handling is starting in this cycle.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
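For reference, tree-PLRU state for N ways takes N-1 bits; a loose C
model of one 4-way instance, only to illustrate the split between
stored state (the little RAM) and pure update/decode logic (plrufn):

    #include <stdint.h>

    /* 3 state bits: bit 0 picks the pseudo-LRU pair (0 = ways 0-1,
     * 1 = ways 2-3); bits 1 and 2 pick the way within each pair. */
    static unsigned plru_victim(uint8_t s)
    {
        if ((s & 1) == 0)
            return (s >> 1) & 1;            /* way 0 or 1 */
        return 2 + ((s >> 2) & 1);          /* way 2 or 3 */
    }

    /* On a hit, point every bit on the accessed way's path at the
     * other side, making that way most recently used. */
    static uint8_t plru_update(uint8_t s, unsigned way)
    {
        if (way < 2) {
            s |= 1;                          /* LRU pair is now 2-3 */
            s = (s & ~2u) | ((way == 0) ? 2u : 0u);
        } else {
            s &= ~1u;                        /* LRU pair is now 0-1 */
            s = (s & ~4u) | ((way == 2) ? 4u : 0u);
        }
        return s;
    }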
Paul Mackerras 82c8b2eae0 icache: Fix compilation with NUM_WAYS = 1
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras 221a7b3df0 icache: Reduce metavalue warnings
As in dcache, this changes most signals declared with integer type to
be unsigned bit vectors instead.  Some code has been rearranged to do
to_integer() or equality comparisons only when the relevant signals
should be well defined.  Non-fatal asserts have been sprinkled
throughout to assist with determining the cause of warnings from
library functions (primarily NUMERIC_STD.TO_INTEGER and
NUMERIC_STD."=").

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras 6db626d245 icache: Log 36 bits of instruction rather than 32
This expands the field in the log buffer that stores the instruction
fetched from the icache to 36 bits, so that we get the insn_code and
illegal instruction indication.  To do this, we reclaim 3 unused bits
from execute1's portion and one other unused bit (previously just set
to 0 in core.vhdl).

This also alters the trigger behaviour to stop after one quarter of
the log buffer has been filled with samples after the trigger, or 256
entries, whichever is less.  This is to ensure that the trigger event
doesn't get overwritten when the log buffer is small.

This updates fmt_log to the new log format.  Valid instructions are
printed as a decimal insn_code value followed by the bottom 26 bits of
the instruction.  Illegal instructions are printed as "ill" followed
by the full 32 bits of the instruction.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
Paul Mackerras 21ab36a0c0 Pre-decode instructions when writing them to icache
This splits out the decoding done in the decode0 step into a separate
predecoder, used when writing instructions into the icache.  The
icache now holds 36 bits per instruction rather than 32.  For valid
instructions, those 36 bits comprise the bottom 26 bits of the
instruction word, a 9-bit insn_code value (which uniquely identifies
the instruction), and a zero in the MSB.  For illegal instructions,
the MSB is one and the full instruction word is in the bottom 32 bits.
Having the full instruction word available for illegal instructions
means that it can be printed in the log when simulating, or in future
could be placed in the HEIR register.

If we don't have an FPU, then the floating-point instructions are
regarded as illegal.  In that case, the insn_code values would fit
into 8 bits, which could be used in future to reduce the size of
decode_rom from 512 to 256 entries.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2 years ago
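The 36-bit format above packs naturally as two cases; a small C
illustration (function names invented, layout per the commit text):

    #include <stdint.h>

    /* Valid: bit 35 = 0, bits 34:26 = 9-bit insn_code,
     * bits 25:0 = low 26 bits of the instruction word. */
    static uint64_t pack_valid(unsigned insn_code, uint32_t insn)
    {
        return ((uint64_t)(insn_code & 0x1ffu) << 26) |
               (insn & 0x03ffffffu);
    }

    /* Illegal: bit 35 = 1, full instruction word in bits 31:0,
     * so it can be logged or, in future, put in HEIR. */
    static uint64_t pack_illegal(uint32_t insn)
    {
        return (1ull << 35) | insn;
    }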
Michael Neuling 404abefd92 Metavalue cleanup for icache.vhdl
Signed-off-by: Michael Neuling <mikey@neuling.org>
2 years ago
Michael Neuling cd52390bf1
Merge pull request #373 from antonblanchard/icache-insn-u-state
icache: Don't output X on i_out.insn
3 years ago
Anton Blanchard e7f0a7c7ac icache: Don't output X on i_out.insn
decode1 has a lot of logic that uses i_out.insn without first looking at
i_out.valid.  Play it safe and never output X state.

Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
3 years ago
Anton Blanchard f06abb67ad icache: Hook up PMU events
We weren't connecting the icache PMU events up.

Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
3 years ago
Paul Mackerras 49ec80ac3e fetch1/icache1: Remove the use_previous logic
This removes logic that I added some time ago with the thought that it
would enable us to do prefetching in the icache.  This logic detects
when the fetch address is an odd multiple of 4 and the next address in
sequence from the previous cycle.  In that case the instruction we
want is in the output register of the icache RAM already so there is
no need to do another read or any icache tag or TLB lookup.

However, this logic adds complexity, and removing it improves timing,
so this removes it.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
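The removed detection boiled down to a small predicate; roughly, in C
(names invented):

    #include <stdbool.h>
    #include <stdint.h>

    /* The wanted word is already in the icache RAM output register when
     * the fetch is sequential and addresses the odd (second) word of
     * the doubleword read last cycle. */
    static bool use_previous(uint64_t nia, bool sequential)
    {
        return sequential && (nia & 4) != 0;
    }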
Benjamin Herrenschmidt e675eba0df icache: req_laddr becomes req_raddr
Uses real_addr_t and only stores the real address bits

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
3 years ago
Benjamin Herrenschmidt 5cfa65e836 Introduce addr_to_wb() and wb_to_addr() helpers
These convert addresses to/from wishbone addresses, and use them
in parts of the caches, in order to make the code a bit more readable.

Along the way, rename some functions in the caches to make it a bit
clearer what they operate on and fix a bug in the icache STOP_RELOAD state where
the wb address wasn't properly converted.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
3 years ago
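On a 64-bit wishbone these conversions are just shifts by log2 of the
bus width in bytes; an illustrative C equivalent of the helpers (the
real ones are VHDL functions):

    #include <stdint.h>

    #define WB_DWORD_SHIFT 3   /* 64-bit data bus: addresses in doublewords */

    static uint64_t addr_to_wb(uint64_t addr) { return addr >> WB_DWORD_SHIFT; }
    static uint64_t wb_to_addr(uint64_t wb)   { return wb << WB_DWORD_SHIFT; }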
Benjamin Herrenschmidt d745995207 Introduce real_addr_t and addr_to_real()
This moves REAL_ADDR_BITS out of the caches and defines a real_addr_t
type for a real address, along with an addr_to_real() conversion helper.

This makes the VHDL a bit more readable.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
3 years ago
Paul Mackerras 9b3b57710a icache: Fix icache invalidation
This fixes two bugs in the flash invalidation of the icache.

The first is that an instruction could get executed twice.  The
i-cache RAM is 2 instructions (64 bits) wide, so one read can supply
results for 2 cycles.  The fetch1 stage tells icache when the address
is equal to the address of the previous cycle plus 4, and in cases
where that is true, bit 2 of the address is 1, and the previous cycle
was a cache hit, we just use the second word of the doubleword read
from the cache RAM.  However, the cache hit/miss logic also continues
to operate, so in the case where the first word hits but the second
word misses (because of an icache invalidation or a snoop occurring in
the first cycle), we supply the instruction from the data previously
read from the icache RAM but also stall fetch1 and start a cache
reload sequence, and subsequently supply the second instruction
again.  This fixes the issue by inhibiting req_is_miss and stall_out
when use_previous is true.

The second bug is that if an icache invalidation occurs while
reloading a line, we continue to reload the line, and make it valid
when the reload finishes, even though some of the data may have been
read before the invalidation occurred.  This adds a new state
STOP_RELOAD which we go to if an invalidation happens while we are in
CLR_TAG or WAIT_ACK state.  In STOP_RELOAD state we don't request any
more reads from memory and wait for the reads we have previously
requested to be acked, and then go to IDLE state.  Data returned is
still written to the icache RAM, but that doesn't matter because the
line is invalid and is never made valid.

Note that we don't have to worry about invalidations due to snooped
writes while reloading a line, because the wishbone arbiter won't
switch to another master once it has started sending our reload
requests to memory.  Thus a store to memory will either happen before
any of our reads have got to memory, or after we have finished the
reload (in which case we will no longer be in WAIT_ACK state).

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
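The second fix adds one state to the reload machine; sketched in C
(state names from the commit, transitions simplified):

    #include <stdbool.h>

    enum icache_state { IDLE, CLR_TAG, WAIT_ACK, STOP_RELOAD };

    /* On an invalidation during a reload, stop issuing reads and drain
     * outstanding acks; data still written to the RAM is harmless since
     * the line is never made valid. */
    static enum icache_state next_state(enum icache_state s,
                                        bool invalidate, bool last_ack)
    {
        switch (s) {
        case CLR_TAG:
            return invalidate ? STOP_RELOAD : WAIT_ACK;
        case WAIT_ACK:
            if (invalidate)
                return STOP_RELOAD;
            return last_ack ? IDLE : WAIT_ACK;
        case STOP_RELOAD:
            return last_ack ? IDLE : STOP_RELOAD;
        default:
            return IDLE;
        }
    }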
Paul Mackerras ca4eb46aea Make wishbone addresses be in units of doublewords or words
This makes the 64-bit wishbone buses have the address expressed in
units of doublewords (64 bits), and similarly for the 32-bit buses the
address is in units of words (32 bits).  This is to comply with the
wishbone spec.  Previously the addresses on the wishbone buses were in
units of bytes regardless of the bus data width, which is not correct
and caused problems with interfacing with externally-generated logic.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras 54b0e8b8c8 core: Predict not-taken conditional branches using BTC
This adds a bit to the BTC to store whether the corresponding branch
instruction was taken last time it was encountered.  That lets us pass
a not-taken prediction down to decode1, which for backwards direct
branches inhibits it from redirecting fetch to the target of the
branch.  This increases coremark by about 2%.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras 65c43b488b PMU: Add several more events
This implements most of the architected PMU events.  The ones missing
are mostly the ones that depend on which level of the cache hierarchy
data is fetched from.  The events implemented here, and their raw
event codes, are:

    Floating-point operation completed (100f4)
    Load completed (100fc)
    Store completed (200f0)
    Icache miss (200fc)
    ITLB miss (100f6)
    ITLB miss resolved (400fc)
    Dcache load miss (400f0)
    Dcache load miss resolved (300f8)
    Dcache store miss (300f0)
    DTLB miss (300fc)
    DTLB miss resolved (200f6)
    No instruction available and none being executed (100f8)
    Instruction dispatched (200f2, 300f2, 400f2)
    Taken branch instruction completed (200fa)
    Branch mispredicted (400f6)
    External interrupt taken (200f8)

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago
Paul Mackerras 231003f7c7 icache: Snoop writes to memory by other agents
This makes the icache snoop writes to memory in the same way that the
dcache does, thus making DMA cache-coherent for the icache as well as
the dcache.

This also simplifies the logic for the WAIT_ACK state by removing the
stbs_done variable, since is_last_row(r.store_row, r.end_row_ix) can
only be true when stbs_done is true.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Paul Mackerras 0fb207be60 fetch1: Implement a simple branch target cache
This implements a cache in fetch1, where each entry stores the address
of a simple branch instruction (b or bc) and the target of the branch.
When fetching sequentially, if the address being fetched matches the
cache entry, then fetching will be redirected to the branch target.
The cache has 1024 entries and is direct-mapped, i.e. indexed by bits
11..2 of the NIA.

The bus from execute1 now carries information about taken and
not-taken simple branches, which fetch1 uses to update the cache.
The cache entry is updated for both taken and not-taken branches, with
the valid bit being set if the branch was taken and cleared if the
branch was not taken.

If fetching is redirected to the branch target then that goes down the
pipe as a predicted-taken branch, and decode1 does not do any static
branch prediction.  If fetching is not redirected, then the next
instruction goes down the pipe as normal and decode1 does its static
branch prediction.

In order to make timing, the lookup of the cache is pipelined, so on
each cycle the cache entry for the current NIA + 8 is read.  This
means that after a redirect (from decode1 or execute1), only the third
and subsequent sequentially-fetched instructions will be able to be
predicted.

This improves the coremark value on the Arty A7-100 from about 180 to
about 190 (more than 5%).

The BTC is optional.  Builds for the Artix 7 35-T part have it off by
default because the extra ~1420 LUTs it takes mean that the design
doesn't fit on the Arty A7-35 board.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
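Direct-mapped with 1024 entries indexed by NIA bits 11..2; a schematic
C lookup (entry layout and tag width are guesses for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    #define BTC_ENTRIES 1024

    struct btc_entry {
        bool     valid;     /* set when taken, cleared when not taken */
        uint64_t tag;       /* NIA bits above the index */
        uint64_t target;    /* branch target address */
    };

    static struct btc_entry btc[BTC_ENTRIES];

    /* The lookup is pipelined: each cycle the entry for NIA + 8 is
     * read, so only the third and later sequentially-fetched
     * instructions after a redirect can be predicted. */
    static bool btc_lookup(uint64_t nia_plus_8, uint64_t *target)
    {
        unsigned idx = (nia_plus_8 >> 2) & (BTC_ENTRIES - 1);
        if (btc[idx].valid && btc[idx].tag == nia_plus_8 >> 12) {
            *target = btc[idx].target;
            return true;
        }
        return false;
    }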
Paul Mackerras 4b2c23703c core: Implement quadword loads and stores
This implements the lq, stq, lqarx and stqcx. instructions.

These instructions all access two consecutive GPRs; for example the
"lq %r6,0(%r3)" instruction will load the doubleword at the address
in R3 into R7 and the doubleword at address R3 + 8 into R6.  To cope
with having two GPR sources or destinations, the instruction gets
repeated at the decode2 stage, that is, for each lq/stq/lqarx/stqcx.
coming in from decode1, two instructions get sent out to execute1.

For these instructions, the RS or RT register gets modified on one
of the iterations by setting the LSB of the register number.  In LE
mode, the first iteration uses RS|1 or RT|1 and the second iteration
uses RS or RT.  In BE mode, this is done the other way around.  In
order for decode2 to know what endianness is currently in use, we
pass the big_endian flag down from icache through decode1 to decode2.
This is always in sync with what execute1 is using because only rfid
or an interrupt can change MSR[LE], and those operations all cause
a flush and redirect.

There is now an extra column in the decode tables in decode1 to
indicate whether the instruction needs to be repeated.  Decode1 also
enforces the rule that lq with RA = RT and lqarx with RA = RT or
RB = RT are illegal.

Decode2 now passes a 'repeat' flag and a 'second' flag to execute1,
and execute1 passes them on to loadstore1.  The 'repeat' flag is set
for both iterations of a repeated instruction, and 'second' is set
on the second iteration.  Execute1 does not take asynchronous or
trace interrupts on the second iteration of a repeated instruction.

Loadstore1 uses 'next_addr' for the second iteration of a repeated
load/store so that we access the second doubleword of the memory
operand.  Thus loadstore1 accesses the doublewords in increasing
memory order.  For 16-byte loads this means that the first iteration
writes GPR RT|1.  It is possible that RA = RT|1 (this is a legal
but non-preferred form), meaning that if the memory operand was
misaligned, the first iteration would overwrite RA but then the
second iteration might take a page fault, leading to corrupted state.
To avoid that possibility, 16-byte loads in LE mode take an
alignment interrupt if the operand is not 16-byte aligned.  (This
is the case anyway for lqarx, and we enforce it for lq as well.)

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
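The register-number trick reduces to flipping the low bit on one of the
two passes, swapped by endianness; a small C illustration:

    #include <stdbool.h>

    /* decode2 issues lq/stq/lqarx/stqcx. twice.  In LE mode the first
     * pass uses RT|1 (or RS|1) and the second uses RT; in BE mode it is
     * the other way around, so loadstore1 always accesses the two
     * doublewords in increasing memory order. */
    static int gpr_for_pass(int rt, bool second, bool big_endian)
    {
        bool use_odd = big_endian ? second : !second;
        return use_odd ? (rt | 1) : rt;
    }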
Paul Mackerras e41cb01bca fetch1: Fix debug stop
The ability to stop the core using the debug interface has been broken
since commit bb4332b7e6b5 ("Remove fetch2 pipeline stage"), which
removed a statement that cleared the valid bit on instructions when
their stop_mark was 1.

Fix this by clearing r.req coming out of fetch1 when r.stop_mark = 1.
This has the effect of making i_out.valid be 0 from the icache.  We
also fix a bug in icache.vhdl where it was not honouring i_in.req when
use_previous = 1.

It turns out that the logic in fetch1.vhdl to handle stopping and
restarting was not correct, with the effect that stopping the core
would leave NIA pointing to the last instruction executed, not the
next instruction to be executed.  In fact the state machine is
unnecessary and the whole thing can be simplified enormously - we
need to increment NIA whenever stop_in = 0 in the previous cycle.

Fixes: bb4332b7e6b5 ("Remove fetch2 pipeline stage")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
Anton Blanchard 605010e33d Fix ghdl warning due to variable shadowing in icache
Fix a couple of ghdl warnings:

icache.vhdl:387:21:warning: declaration of "i" hides constant "i" [-Whide]
icache.vhdl:400:17:warning: declaration of "i" hides constant "i" [-Whide]

Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
4 years ago
Paul Mackerras 2e7b371305 core: Implement big-endian mode
Big-endian mode affects both instruction fetches and data accesses.
For instruction fetches, we byte-swap each word read from memory when
writing it into the icache data RAM, and use a tag bit to indicate
whether each cache line contains instructions in BE or LE form.

For data accesses, we simply need to invert the existing byte_reverse
signal in BE mode.  The only thing to be careful of is to get the sign
bit from the correct place when doing a sign-extending load that
crosses two doublewords of memory.

For now, interrupts unconditionally set MSR[LE].  We will need some
sort of interrupt-little-endian bit somewhere, perhaps in LPCR.

This also fixes a debug report statement in fetch1.vhdl.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
4 years ago
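Byte-swapping each instruction word on its way into the icache data RAM
is an ordinary 32-bit swap; in C terms:

    #include <stdint.h>

    /* Each word of a BE cache line is byte-reversed as it is written to
     * the icache RAM; a tag bit records BE vs LE per line. */
    static uint32_t bswap32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
               ((x << 8) & 0x00ff0000u) | (x << 24);
    }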
Paul Mackerras 893d2bc6a2 core: Don't generate logic for log data when LOG_LENGTH = 0
This adds "if LOG_LENGTH > 0 generate" to the places in the core
where log output data is latched, so that when LOG_LENGTH = 0 we
don't create the logic to collect the data which won't be stored.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 144d8e3c61 icache: Do PLRU update one cycle later
This does the PLRU update based on r.hit_valid and r.hit_way rather
than req_is_hit and req_hit_way, which means there is now a register
between the TLB and cache tag lookup and the PLRU update, which
should help with timing.

As a result, the PLRU victim way selection becomes valid one cycle
later, in the cycle when r.state = CLR_TAG.  So we have to use the
PLRU output directly in the CLR_TAG state and r.store_way in the
WAIT_ACK state.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 419c9a68e8
Merge pull request #206 from Jbalkind/icachecleanup
Icache constants cleanup
5 years ago
Jonathan Balkind d9bda521aa Minor refactor of icache to make less dependent on wishbone
Signed-off-by: Jonathan Balkind <jbalkind@princeton.edu>
5 years ago
Paul Mackerras 62b24a8dae icache: Improve latencies when reloading cache lines
The icache can now detect a hit on a line being refilled from memory,
as we have an array of individual valid bits per row for the line
that is currently being loaded.  This enables the request that
initiated the refill to be satisfied earlier, and also enables
following requests to the same cache line to be satisfied before the
line is completely refilled.  Furthermore, the refill now starts
at the row that is needed.  This should reduce the latency for an
icache miss.

We now get a 'sequential' indication from fetch1, and use that to know
when we can deliver an instruction word using the other half of the
64-bit doubleword that was read last cycle.  This doesn't make much
difference at the moment, but it frees up cycles where we could test
whether the next line is present in the cache so that we could
prefetch it if not.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras b5a7dbb78d core: Remove fetch2 pipeline stage
The fetch2 stage existed primarily to provide a stash buffer for the
output of icache when a stall occurred.  However, we can get the same
effect -- of having the input to decode1 stay unchanged on a stall
cycle -- by using the read enable of the BRAMs in icache, and by
adding logic to keep the outputs unchanged on a clock cycle when
stall_in = 1.  This reduces branch and interrupt latency by one
cycle.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 49a4d9f67a Add core logging
This logs 256 bits of data per cycle to a ring buffer in BRAM.  The
data collected can be read out through 2 new SPRs or through the
debug interface.

The new SPRs are LOG_ADDR (724) and LOG_DATA (725).  LOG_ADDR contains
the buffer write pointer in the upper 32 bits (in units of entries,
i.e. 32 bytes) and the read pointer in the lower 32 bits (in units of
doublewords, i.e. 8 bytes).  Reading LOG_DATA gives the doubleword
from the buffer at the read pointer and increments the read pointer.
Setting bit 31 of LOG_ADDR inhibits the trace log system from writing
to the log buffer, so the contents are stable and can be read.

There are two new debug addresses which function similarly to the
LOG_ADDR and LOG_DATA SPRs.  The log is frozen while either or both of
the LOG_ADDR SPR bit 31 or the debug LOG_ADDR register bit 31 are set.

The buffer defaults to 2048 entries, i.e. 64kB.  The size is set by
the LOG_LENGTH generic on the core_debug module.  Software can
determine the length of the buffer because the length is ORed into the
buffer write pointer in the upper 32 bits of LOG_ADDR.  Hence the
length of the buffer can be calculated as 1 << (31 - clz(LOG_ADDR)).

There is a program to format the log entries in a somewhat readable
fashion in scripts/fmt_log/fmt_log.c.  The log_entry struct in that
file describes the layout of the bits in the log entries.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
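Since the length is ORed into the write pointer above any valid pointer
bits, software recovers it with a count-leading-zeros; an illustrative
C example (using the GCC/Clang builtin):

    #include <stdint.h>

    /* LOG_ADDR upper 32 bits = write pointer | buffer length (a power
     * of two), so the topmost set bit gives the length in entries.
     * Assumes the upper half is non-zero, which the ORed-in length
     * guarantees. */
    static uint32_t log_buffer_entries(uint64_t log_addr)
    {
        uint32_t upper = (uint32_t)(log_addr >> 32);
        return 1u << (31 - __builtin_clz(upper));
    }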
Benjamin Herrenschmidt d266c9e67d
icache: Latch PLRU victim output (#199)
This stores the output of the PLRU big mux and clears the
tags and valid bits on the next cycle.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt b863791e38
icache: Fix icbi potentially clobbering the icache (#192)
icbi currently just resets the icache.  This has some nasty side
effects, clearing not only the TLB but also the wishbone interface
state.

That means that any ongoing cycle will be dropped.

However, most of our slaves don't handle that well and will continue
sending acks for already issued requests.

Under some circumstances we can thus restart an icache load and get
spurious ack/data from the wishbone left over from the "cancelled"
sequence.

This has broken booting Linux for me.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt ecaa5e2fb2 dcache: Rework RAM wrapper to synthesize better on Xilinx
The global wr_en signal is causing Vivado to generate two TDP (True Dual Port)
block RAMs instead of one SDP (Simple Dual Port) for each cache way. Remove
it and instead apply an AND to the individual byte write enables.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Paul Mackerras c164a2f4ea Merge branch 'mmu'
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras a658766fcf Implement slbia as a dTLB/iTLB flush
Slbia (with IH=7) is used in the Linux kernel to flush the ERATs
(our iTLB/dTLB), so make it do that.

This moves the logic to work out whether to flush a single entry
or the whole TLB from dcache and icache into mmu.  We now invalidate
all dTLB and iTLB entries when the AP (actual pagesize) field of
RB is non-zero on a tlbie[l], as well as when IS is non-zero.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
Paul Mackerras 3d4712ad43 Add TLB to icache
This adds a direct-mapped TLB to the icache, with 64 entries by default.
Execute1 now sends a "virt_mode" signal from MSR[IR] to fetch1 along
with redirects to indicate whether instruction addresses should be
translated through the TLB, and fetch1 sends that on to icache.
Similarly a "priv_mode" signal is sent to indicate the privilege
mode for instruction fetches.  This means that changes to MSR[IR]
or MSR[PR] don't take effect until the next redirect, meaning an
isync, rfid, branch, etc.

The icache uses a hash of the effective address (i.e. next instruction
address) to index the TLB.  The hash is an XOR of three fields of the
address; with a 64-entry TLB, the fields are bits 12--17, 18--23 and
24--29 of the address.  TLB invalidations simply invalidate the
indexed TLB entry without checking the contents.

If the icache detects a TLB miss with virt_mode=1, it will send a
fetch_failed indication through fetch2 to decode1, which will turn it
into a special OP_FETCH_FAILED opcode with unit=LDST.  That will get
sent down to loadstore1, which will currently just raise an Instruction
Storage Interrupt (0x400) exception.

One bit in the PTE obtained from the TLB is used to check whether an
instruction access is allowed -- the privilege bit (bit 3).  If bit 3
is 1 and priv_mode=0, then a fetch_failed indication is sent down to
fetch2 and to decode1, which generates an OP_FETCH_FAILED.  Any PTEs
with PTE bit 0 (EAA[3]) clear or bit 8 (R) clear should not be put
into the iTLB since such PTEs would not allow execution by any
context.

Tlbie operations get sent from mmu to icache over a new connection.

Unfortunately the privileged instruction tests are broken for now.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago
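The index hash is a three-way XOR of effective-address fields; for the
default 64-entry TLB, in C (mirroring the bit ranges given above; the
real function lives in icache.vhdl):

    #include <stdint.h>

    #define ITLB_BITS 6    /* 64 entries */

    /* XOR bits 12-17, 18-23 and 24-29 of the EA (LSB = bit 0). */
    static unsigned itlb_hash(uint64_t ea)
    {
        return (unsigned)(((ea >> 12) ^ (ea >> 18) ^ (ea >> 24)) &
                          ((1u << ITLB_BITS) - 1));
    }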
Benjamin Herrenschmidt 31b55b2a75 core: Improve core reset
The icache would still spit out an instruction, which could
cause a 0x700 exception instead of a reset.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Anton Blanchard dcee60a729 Fix a ghdlsynth issue in icache
ghdlsynth doesn't like the debug statement, so wrap it in a generate.

Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
5 years ago
Benjamin Herrenschmidt 9a63c098a5 Move log2/ispow2 to a utils package
(Out of icache and dcache)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 3df018cdc0 icache: Add wishbone pipelining support
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 1a63c39704 Make it possible to change wishbone address size
All that needs to be changed now is the size in wishbone_types.vhdl
and the address decoder in soc.vhdl

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago
Benjamin Herrenschmidt 6e0ee0b0db icache & dcache: Fix store way variable
We used the variable "way" in the wrong state of the cache state
machine when updating a line's valid bit after the end of the wishbone
transactions; we need to use the latched "store_way" instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
5 years ago