Use a more generic console_init() instead of potato_uart_init(),
and do the same for interrupt control. There should be no
change in behaviour.
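
A minimal sketch of what the generic wrappers could look like,
assuming the Potato UART remains the only backend for now (the
interrupt-control wrapper name and signature are assumptions, not
the actual code):

    #include <stdbool.h>

    void potato_uart_init(void);                 /* existing backend */
    void potato_uart_irq_en(bool rx, bool tx);   /* assumption */

    /* Generic console entry points, thin wrappers over the UART. */
    void console_init(void)
    {
        potato_uart_init();
    }

    void console_set_irq_en(bool rx_irq, bool tx_irq)
    {
        potato_uart_irq_en(rx_irq, tx_irq);
    }
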
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This makes the ICS support fewer than the 8 architected priority
bits and sets the SoC to use 3 bits by default.
A value with all the supported bits set translates to "masked" (and
will read back as 0xff); any smaller value is used as-is.
Linux doesn't use priorities above 5, so this is a way to save
silicon. The number of supported priority bits is exposed to the
OS via the config register.
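
In C terms, the write-side behaviour could be sketched roughly like
this (PRIO_BITS and the helper are illustrative, not the RTL):

    #include <stdint.h>

    #define PRIO_BITS 3                 /* SoC default per this change */
    #define PRIO_MASK ((1u << PRIO_BITS) - 1)

    /* Stored priority for a written value: any value whose supported
     * bits are all ones (e.g. 0xff truncated to 3 bits) is treated as
     * "masked" and reads back as 0xff; smaller values are kept as-is. */
    static uint8_t prio_store(uint8_t val)
    {
        if ((val & PRIO_MASK) == PRIO_MASK)
            return 0xff;                /* masked */
        return val & PRIO_MASK;
    }
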
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Move the external interrupt generation to a separate module
"ICS" (source controller) with a register per source, currently
containing only the priority control.
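
Viewed from software, each source's register might look something
like this (the layout is an assumption for illustration; only the
priority field exists so far):

    #include <stdint.h>

    /* Hypothetical per-source ICS register view. */
    struct ics_source {
        uint8_t priority;       /* 0xff = masked */
        uint8_t reserved[3];    /* room for future per-source controls */
    };
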
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
That's how Linux expects it. This also simplifies the
register access implementation since the bit fields now
align properly regardless of the access size.
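
As an illustration only (the layout below is an assumption, not the
actual XICS register map): once a byte-wide field occupies the same
byte lane for every access size, 1-byte and 4-byte accesses can share
a single decode path:

    #include <stdint.h>

    union presentation_reg {
        uint32_t word;      /* 4-byte access sees the whole register */
        uint8_t  byte[4];   /* a 1-byte access to byte[0] hits the same
                               lane as the low byte of the word access */
    };
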
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Currently the test writes to the XICS and then checks that the
expected interrupt has happened. This turns into a stbcix
instruction followed immediately by a load from the variable that
indicates whether an interrupt has happened. It is possible for
it to take a few cycles for the store to reach the XICS and the
interrupt request signal to come back to the core, particularly
with improvements to the load/store unit and dcache.
This therefore adds a delay between storing to the XICS and
checking for the occurrence of an interrupt, so as to give the
signals time to propagate. The delay loop does an arbitrary 10
iterations, and each iteration does two loads and one store to
(cacheable) memory.
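
One plausible shape for such a loop (the actual test code may
differ): a volatile index forces every iteration through memory,
which yields roughly two loads and one store per pass (one load for
the comparison, plus a load/store pair for the increment):

    static void delay(void)
    {
        /* 10 iterations is arbitrary; volatile keeps the loop from
         * being optimised away and keeps the counter in memory. */
        for (volatile int i = 0; i < 10; i++)
            ;
    }
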
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This changes the SoC interconnect such that the main 64-bit wishbone out
of the processor is first split between only 3 slaves (BRAM, DRAM and a
general "IO" bus) instead of all the slaves in the SoC.
The IO bus leg is then latched and down-converted to 32 bits data width,
before going through a second address decoder for the various IO devices.
This significantly reduces routing and timing pressure on the main
bus, getting rid of the frequent timing violations seen when
synthesizing for smallish FPGAs such as the Artix-7 35T found on the
original Arty board.
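
Functionally, the top-level decode reduces to something like the
sketch below (address ranges are invented for illustration; the real
map lives in the SoC sources):

    #include <stdint.h>

    enum slave { SLAVE_BRAM, SLAVE_DRAM, SLAVE_IO };

    /* First-level decode: only three legs, so the main 64-bit bus
     * stays small and fast. The IO leg is latched and down-converted
     * to 32-bit data width before a second decoder picks the device. */
    static enum slave decode_top(uint64_t addr)
    {
        if (addr < 0x40000000ull)       /* assumption: BRAM window */
            return SLAVE_BRAM;
        if (addr < 0xc0000000ull)       /* assumption: DRAM window */
            return SLAVE_DRAM;
        return SLAVE_IO;                /* everything else -> IO bus */
    }
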
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
It makes things a bit more standard and a bit nicer to read without
all those strlen() calls. Also, console.c takes care of adding the
carriage returns before the linefeeds.
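
A sketch of the CR/LF handling, assuming a low-level output routine
along these lines (the helper name is an assumption):

    void console_putchar(char c);       /* hypothetical low-level output */

    int puts(const char *s)
    {
        while (*s) {
            if (*s == '\n')
                console_putchar('\r');  /* CR before each LF */
            console_putchar(*s++);
        }
        return 0;
    }
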
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Checks interrupt masking and priorities.
Adds these to `make test_xics`, which is also run by `make check`.
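
Roughly, one masking check could look like this (helper names are
assumptions; in XICS a source is only presented when its priority is
numerically lower, i.e. more favoured, than the CPPR):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    void ics_set_priority(int src, uint8_t prio);   /* hypothetical */
    void icp_set_cppr(uint8_t cppr);                /* hypothetical */
    bool seen_interrupt(void);   /* set by the handler -- assumption */
    void delay(void);            /* settle time, per the earlier test fix */

    static void test_masking(int src)
    {
        ics_set_priority(src, 5);
        icp_set_cppr(4);            /* 5 >= 4: must stay masked */
        delay();
        assert(!seen_interrupt());
        icp_set_cppr(6);            /* 5 < 6: should now be presented */
        delay();
        assert(seen_interrupt());
    }
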
Signed-off-by: Michael Neuling <mikey@neuling.org>