diff --git a/enable_capi_snap/bk_main.xml b/enable_capi_snap/bk_main.xml index 2655e43..5768618 100644 --- a/enable_capi_snap/bk_main.xml +++ b/enable_capi_snap/bk_main.xml @@ -40,9 +40,9 @@ - Acceleration Workgroup + System Software Workgroup - aclwg-chair@openpowerfoundation.org + ???wg-chair@openpowerfoundation.org OpenPower Foundation @@ -68,11 +68,11 @@ Work Group name, and Work Product track (both in second paragraph). --> - The purpose of this document is to describe how to enable a new customer card on CAPI SNAP framework. SNAP is a open-sourced programming framework for FPGA Acclerations. Its homepage is https://github.com/open-power/snap. With it, you can develop accelerations with Power and CAPI technology easily. + The purpose of this document is to describe how to enable a new customer card to support the CAPI SNAP framework. SNAP is an open-source programming framework for FPGA Accelerations. Its homepage is https://github.com/open-power/snap. With it, you can develop accelerators with CAPI technology easily. - This document describes the flow and steps to enable a new PCIe FPGA card to have or CAPI2.0 capability. Firstly, please check whether your PCIe FPGA card is listed on today's "SNAP enabled cards" (On the homepage README of SNAP Github), if not, this document will guide you on how to enable it. Since all of the project files are open-sourced, you can create a Github repository fork, and create a new board support package (BSP) and walk through the entire working flow to enable SNAP. + This document describes the flow and steps to enable a new PCIe FPGA card to be able to run in CAPI2.0 mode, and to support the SNAP framework. If your PCIe FPGA card is not listed in today's "SNAP enabled cards" (on the homepage README of SNAP Github), this document will guide you on how to enable it. 
Since all of the project files are open-source, you can create a Github repository fork, create a new board support package (BSP), and walk through the workflow to enable SNAP. - This document is a Workgroup Note owned by the Acceleration Workgroup and handled in compliance with the requirements outlined in the + This document is a Workgroup Note owned by the System Software Workgroup and handled in compliance with the requirements outlined in the OpenPOWER Foundation Work Group (WG) Process document. It was created using the Master Template Guide version &template_version;. Comments, questions, etc. can be submitted to the diff --git a/enable_capi_snap/ch_capi20_bsp.xml b/enable_capi_snap/ch_capi20_bsp.xml index 0da338d..1d49891 100644 --- a/enable_capi_snap/ch_capi20_bsp.xml +++ b/enable_capi_snap/ch_capi20_bsp.xml @@ -18,11 +18,11 @@ xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="chapter_capi20_bsp"> -Enable CAPI2.0 BSP -
Diagram +Enable CAPI2.0 BSP +
Structure Each card supplier may design their FPGA board with different FPGA chips, circuit components, memory and IOs, so the BSP (board support package) differs from card to card. That's why an open-source project is so helpful: it allows card suppliers and developers to explore the functions of the card freely, and benefit from CAPI technology.
- CAPI2.0: capi2-bsp (capi_bsp_wrap.xcix) + Project hierarchy for HDK mode @@ -47,18 +47,17 @@ At least Flash interface pins and PCIe interface pins are required to be assigned in xdc files precisely. Between capi_bsp_wrap and user AFU (psl_accel), there are 6 groups of signals: Command, DMA, Buffer, Response, MMIO and Control. Please refer to CAPI2.0 PSL/AFU interface Spec for the details. - The logic in CAPI2.0 snap_core implemented the data path with DMA interface. Buffer interface is not used. -
+
Step-by-step guidance
Work on github capi2-bsp is a public Github repository. You need to have a Github account first. Then create a "fork" (Click the "fork" button) on https://github.com/open-power/capi2-bsp. git clone https://github.com/[YOUR_USERNAME]/capi2-bsp - Actually capi2-bsp is a submodule of snap. That will be introduced later. - Keep working on your own capi2-bsp fork, when it has been validated to work well, submit a pull request to "open-power/capi2-bsp" and require merging into the public upstream. + capi2-bsp is also a submodule of snap. + Keep working on your own capi2-bsp fork; when it has been validated to work well, submit a pull request to "open-power/capi2-bsp" and request merging into the public upstream.
Preparations - First, define a FPGACARD name. It can start from the company's name, followed with the card name and be short. For example. ADKU3 = Alpha-Data ADM-PCIE-KU3. Get information from the card supplier. + First, define an FPGACARD name. It can start with the company's name, followed by the card name, and should be short. For example, ADKU3 = Alpha-Data ADM-PCIE-KU3. Get information from the card supplier. Information to collect @@ -117,50 +116,71 @@ Make sure the information in xdc/tcl files is permitted to be open-source. There are some other modifications you should pay attention to: - Send email to OpenPower Acceleration Workgroup or contact your representative to apply for a subsystem device ID for the new card. For example, ADKU3 uses 0x0605. S241 uses 0x0660. The information needs to be filled in "[FPGACARD]/tcl/create_ip.tcl", CONFIG.PF0_SUBSYSTEM_ID - - As a CAPI device, you need to make sure PF0 (physical function) has PF0_DEVICE_ID=0477 and PF0_SUBSYSTEM_VENDOR_ID=1014. This is required by the linux kernel module cxl (pci.c), otherwise the card will not be recognized as a CAPI card by system. + PCIe core IP creation: + + "Vendor ID" and "Device ID" have to be 0x1014 and 0x0477, so the kernel module cxl can recognize the card as a CAPI device (see pci.c). + If the card vendor has a code allocated by PCISIG (see the PCISIG member companies list), use it as "Subsystem Vendor ID". "Subsystem Device ID" can be chosen freely. + If the card vendor doesn't have a code allocated by PCISIG, or just for testing and evaluation purposes, please use the default "Subsystem Vendor ID" = 0x1014, and send email to aclwg-chair@openpowerfoundation.org to get a distinct "Subsystem Device ID" to differentiate this card from others. 
+ Example: (in create_ip.tcl) + create_ip -name pcie4_uscale_plus -vendor xilinx.com -library ip -module_name pcie4_uscale_plus_0 -dir $ip_dir >> $log_file +set_property -dict [list \ + CONFIG.PF0_CLASS_CODE {1200ff} \ + CONFIG.PF0_REVISION_ID {02} \ + CONFIG.VENDOR_ID {1014} \ + CONFIG.PF0_DEVICE_ID {0477} \ + CONFIG.PF0_SUBSYSTEM_VENDOR_ID {1014} \ + CONFIG.PF0_SUBSYSTEM_ID {0661} \ + ...... \ + ...... \ + ] [get_ips pcie4_uscale_plus_0] >> $log_file + The corresponding "Subsystem Vendor ID" and "Subsystem Device ID" need to be added into capi-utils, file "psl-devices". + + - - If you are using Xilinx VU33P or VU37P who have HBM, this is actually a new FPGA family "virtexuplushbm". Or if you are using other new FPGA Production family, additional steps need to take: + Product Family support: + If the FPGA chip types are Xilinx VU33P or VU37P, which have HBM, this is actually a new FPGA family, virtexuplushbm. For a new FPGA Production family, additional steps need to be taken: - capi2-bsp/psl/create_ip.tcl: "set_property supported_families ...", add new family name like "virtexuplushbm Production" - capi2-bsp/common/tcl/create_capi_bsp.tcl: "set_property supported_families ...", do the same as above. - Add family support to PSL9 ZIP package: unzip the package, do the hacking, and zip them back again. - unzip ibm.com_CAPI_PSL9_WRAP_2.00.zip + "capi2-bsp/psl/create_ip.tcl": set_property supported_families ..., add the new family name, like "virtexuplushbm Production" + "capi2-bsp/common/tcl/create_capi_bsp.tcl": set_property supported_families ..., do the same as above. + Add family support to PSL9 ZIP package: unzip the package, do the modifications, and zip them back again. 
Commands: + $ unzip ibm.com_CAPI_PSL9_WRAP_2.00.zip (modify component.xml to add the new family name, search "supportedFamilies") -zip -r ibm.com_CAPI_PSL9_WRAP_2.00.zip component.xml src/ xgui/ -rm -fr component.xml src/ xgui/ +$ zip -r ibm.com_CAPI_PSL9_WRAP_2.00.zip component.xml src/ xgui/ +$ rm -fr component.xml src/ xgui/ - VSEC starting address: VSEC (Vendor Specific Extended Capability Structure) is a part of PCIe capability list architecture. It needs to be properly linked in PCIe config space. File "capi2-bsp/[FPGACARD]/src/capi_vsec.vhdl, vsec_addr[21:32] defines the address for VSEC. It should be matched with PCIe core value PF0_SECONDARY_PCIE_CAP_NEXTPTR. Take card U200 for example, its vsec_addr[21:32] starts from 12'h400 (12'b0100_0000_0000), and there is a patch_ip.tcl to modify the pcie4 core's default value 12'h480 to 12'h400. (Actually this is a historical workaround, because Xilinx had modified its extended capability starting address from 12'h400 to 12'h480 on one version. The patch is "tcl/patch_ip.tcl" + VSEC starting address: + VSEC (Vendor Specific Extended Capability Structure) is a part of PCIe capability list architecture. It needs to be properly linked in PCIe config space. "capi2-bsp/[FPGACARD]/src/capi_vsec.vhdl": vsec_addr[21:32] defines the address for VSEC. It must match the PCIe core value PF0_SECONDARY_PCIE_CAP_NEXTPTR. Take card U200 as an example: its vsec_addr[21:32] starts from 12'h400 (12'b0100_0000_0000), and "tcl/patch_ip.tcl" modifies it from the default value 12'h480 to 12'h400. exec /bin/bash -c "sed -i \"s/PF0_SECONDARY_PCIE_CAP_NEXTPTR=0x480/PF0_SECONDARY_PCIE_CAP_NEXTPTR=0x400/\" $pcie_source" exec /bin/bash -c "sed -i \"s/PF0_SECONDARY_PCIE_CAP_NEXTPTR('H480)/PF0_SECONDARY_PCIE_CAP_NEXTPTR('H400)/\" $pcie_source" - About Xilinx PCIe code information for extended configuration space, you can find it on PG156 (for Ultrascale Device) or PG213 (for Ultrascale+ Device). 
- For Ultrascale+ HBM device, pcie4c core, the VSEC starts from 12'hE80. At this time vsec_addr must be changed in capi_vsec.vhdl. And the above two lines in patch_ip.tcl should be disabled. + Xilinx PCIe code information for extended configuration space can be found in PG156 (for Ultrascale Device) or PG213 (for Ultrascale+ Device). + For the Ultrascale+ HBM device's pcie4c core, the VSEC starts from 12'hE80. In this case vsec_addr[21:32] must be changed in "capi_vsec.vhdl", and the above two lines in "patch_ip.tcl" are not needed anymore. - Vital Product Data: Source files under "capi2-bsp/[FPGACARD]/src": capi_vsec.vhdl. This step is optional. Edit the hardcoded "vpd44data_be" to add VPD (Vital Product Data) information. Ideally this information should be read from an I2C EEPROM. The FPGA supplier wrote the content of EEPROM before shipping. However, today we take the simpliest way to write some hard coded value. "capi2-bsp/common/script" has a script "gen_vsec.sh" to help you do this. + Vital Product Data: This step is optional. + "capi2-bsp/[FPGACARD]/src/capi_vsec.vhdl": Edit the hardcoded vpd44data_be to add VPD (Vital Product Data) information. Ideally this information should be read from an I2C EEPROM, whose content the FPGA supplier writes before shipping. However, today we take the simplest way and write some hard-coded values. "capi2-bsp/common/script" has a script "gen_vsec.sh" to do this. - User Image Address: Source files under "capi2-bsp/[FPGACARD]/src": capi_xilmltbt.vhdl. Edit the User image starting address "wbstart_addr". + User Image Address: + "capi2-bsp/[FPGACARD]/src/capi_xilmltbt.vhdl": Edit the User image starting address wbstart_addr. wbstart_addr <= "User_image_address" when (cpld_user_bs_req = '1') else "00000000000000000000000000000000"; - capi_xilmltbt.vhdl has a Xilinx multi-boot core. That means you can create two kinds of images: Factory image and User image. 
Factory images will be placed at address 0 of FPGA Flash, and User image will be placed at "User_image_address" on the flash. When power-on or the FPGA card is reset, the multiboot core knows where to load the image. Usually we put a Golden factory image on address 0 and never change it, and multiboot core always tries to load user image first. If the user image has something wrong, multiboot logic will tell the FPGA to "fallback" to factory image. You still see the card in the system and you can just program a new user image to try again. + "capi_xilmltbt.vhdl" has a Xilinx multi-boot core. That means you can create two kinds of images: Factory image and User image. Factory images will be placed at address 0 of the FPGA flash, and the User image will be placed at "User_image_address" on the flash. When the card is powered on or reset, the multiboot core knows where to load the image. Usually we put a Golden factory image at address 0 and never change it, and the multiboot core always tries to load the user image first. If there is something wrong with the user image, the multiboot logic will tell the FPGA to "fall back" to the factory image. You still see the card in the system and you can just program a new user image to try again. - Check Vivado Version. Make sure this version of Vivado tool supports the FPGA part name you have assigned in "capi2-bsp/[FPGACARD]/Makefile". For some very new FPGA chip types, in one Vivado version they may have a suffix of "es" (engineering sample), and in a newer Vivado version the "es" suffix is removed. + Check Vivado Version: + Make sure your version of the Vivado tool supports the FPGA part name you have assigned in "capi2-bsp/[FPGACARD]/Makefile". For some very new FPGA chip types, in one Vivado version they may have a suffix of "es" (engineering sample), and in a newer Vivado version the "es" suffix is removed.
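The substitution performed by "tcl/patch_ip.tcl" (moving the secondary capability pointer from 0x480 to 0x400) can be tried standalone. A minimal sketch, assuming a Linux shell; "pcie_core_demo.v" is a fabricated stand-in for the real generated PCIe core source referenced by $pcie_source:

```shell
# Fabricated stand-in for the generated PCIe core source ($pcie_source).
echo "parameter PF0_SECONDARY_PCIE_CAP_NEXTPTR=0x480," > pcie_core_demo.v
# The same substitution that patch_ip.tcl applies via sed.
sed -i 's/PF0_SECONDARY_PCIE_CAP_NEXTPTR=0x480/PF0_SECONDARY_PCIE_CAP_NEXTPTR=0x400/' pcie_core_demo.v
grep PF0_SECONDARY_PCIE_CAP_NEXTPTR pcie_core_demo.v
```

On the real tree the file name and exact parameter syntax depend on the Vivado version, which is why the patch runs two sed variants (the =0x480 form and the 'H480 form).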
Generate capi_bsp_wrap cd capi2-bsp make [FPGACARD] - If it is successfully done, the generation of BSP for CAPI2.0 is completed. For HDK developers, they can create their own Vivado project and import capi_bsp_wrap as an IP. But for SNAP developers, there are some other work to do, see in next chapter. + If it is successfully done, the generation of the BSP for CAPI2.0 is completed. Developers using HDK mode can create their own Vivado project and import capi_bsp_wrap as an IP. But for SNAP developers there is some more work to do; see the next chapter.
diff --git a/enable_capi_snap/ch_capi20_snap.xml b/enable_capi_snap/ch_capi20_snap.xml index f5a97ae..7fcc14c 100644 --- a/enable_capi_snap/ch_capi20_snap.xml +++ b/enable_capi_snap/ch_capi20_snap.xml @@ -18,7 +18,7 @@ xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="chapter_capi20_snap"> -Enable CAPI2.0 SNAP +Enable CAPI2.0 SNAP
Work on github Snap is also a public Github repository. Create a "fork" (Click the "fork" button) on https://github.com/open-power/snap. Keep working on your own snap fork; when it works, submit a pull request to "open-power/snap" and request merging into the public upstream. @@ -34,14 +34,16 @@ git submodule update
SNAP structure On the FPGA side, there are three parts that need to be considered when moving to a new FPGA card. They are (a) BSP, (b) snap_core, (c) DDR memory controller (mig). And there are also some components in SNAP that need to be updated for a new FPGA card.
- Design hierarchy + Project hierarchy for SNAP -
- SNAP also includes the software part. The following picture shows the SNAP github repository folders and files: + + Module snap_core on CAPI2.0 implements the data path with the DMA interface. The Buffer interface is not used. + + The following picture shows the SNAP github repository folders and files.
Repository structure @@ -51,14 +53,14 @@ git submodule update
All of the user-developed accelerators are in "actions" directory. There are already some examples there. Each "action" has its "sw", "hw", "tests", and other sub-directories. The hardware part uses "action_wrapper" as its top. - Then back to ${SNAP_ROOT}, "software" directory includes libsnap, header files and some tools. "hardware" directory is the main focus. deconfig has the config files for silent testing purpose, and scripts has the menu settings and other scripts. + The "software" directory includes libsnap, header files and some tools. The "hardware" directory is the main focus. "deconfig" has the config files for silent testing purposes, and "scripts" has the menu settings and other scripts. How does SNAP work and what are the files used in each step? - make snap_config: The menu to select cards and other options is controlled by "script/Kconfig" + make snap_config: The menu to select cards and other options is controlled by "script/Kconfig" - make model: This step creates a Vivado project. It firstly calls "hardware/setup/create_snap_ip.tcl" to generate the IP files in use, then calls "hardware/setup/create_framework.tcl" to build the project. About create_framework.tcl: + make model: This step creates a Vivado project. It first calls "hardware/setup/create_snap_ip.tcl" to generate the IP files in use, then calls "hardware/setup/create_framework.tcl" to build the project. About "create_framework.tcl": It adds BSP (board support package). In CAPI1.0, it is also called PSL Checkpoint file (b_route_design.dcp) or base_image. It uses the path pointing to b_route_design.dcp and adds it into the design. In CAPI2.0, it will call the make process in the capi2-bsp submodule to generate "capi_bsp_wrap" if it doesn't exist. If you have already successfully generated it, this step is skipped. Then "create_framework.tcl" adds the capi_bsp_wrap (xcix or xci file) into the design. 
@@ -67,19 +69,19 @@ git submodule update It adds FPGA top files and snap_core files (in hardware/hdl/core). - It adds constrain files: in hardware/setup/${FPGACARD} or in hardware/capi2-bsp/${FPGACARD} + It adds constrain files: in "hardware/setup/[FPGACARD]" or in "hardware/capi2-bsp/[FPGACARD]" - It adds user files (in actions/${ACTION_NAME}/hw). User's action hardware uses top file named "action_wrapper.vhd" + It adds user files (in "actions/[ACTION_NAME]/hw"). User's action hardware uses top file named "action_wrapper.vhd" - It adds simulation files (in hardware/sim/core) including simulation top files and simulation models. (If "no_sim" is selected in snap_config menu, this step is skipped.) + It adds simulation files (in "hardware/sim/core") including simulation top files and simulation models. (If no_sim is selected in snap_config menu, this step is skipped.) - After above steps, "viv_project" is created. You can open it with Vivado GUI, and check the design hierarchy. And it will call the selected simulator to compile the simulation model. + After above steps, "hardware/viv_project" is created. You can open it with Vivado GUI, and check the design hierarchy. And it will call the selected simulator to compile the simulation model. - make image: This step runs synthesis, implementation and bitstream generation. It calls "hardware/setup/snap_build.tcl" and also uses some related tcl scripts to work on "viv_project". In this step, "hardware/build" will be created and the output products like bit images, checkpoints (middle products for debugging) and reports (reports of timing, clock, IO, utilization, etc.) If everything runs well and timing passes, user will get the bitstream files (in "build/Images" sub directory) to program the FPGA card. + make image: This step runs synthesis, implementation and bitstream generation. It calls "hardware/setup/snap_build.tcl" and also uses some related tcl scripts to work together. 
In this step, "hardware/build" will be created, containing the output products: bit images, checkpoints (intermediate products for debugging) and reports (timing, clock, IO, utilization, etc.). If everything runs well and timing passes, the user will get the bitstream files (in the "build/Images" sub-directory) to program the FPGA card.
@@ -96,14 +98,14 @@ git submodule update If you meet files ending with "_source", like "psl_fpga.vhd_source", that means this file will be pre-processed to generate the output file without "_source" suffix, like "psl_fpga.vhd". There are #ifdef macros or comments like -- only for NVME_USED=TRUE. They help to create a target VHDL/Verilog file with different configurations. - Below lists the files to change. There may be some differences with new commits in SNAP git repository. Keep in mind they include: + Below lists the files to change: snap_config and environmental files Hardware: psl_accel and psl_fpga (top) RTL files Hardware: tcl files for the workflow - Hardware: Board: xdc files for IO/floorplan/clock/bitstream - Hardware: DDR: create DDR Memory controller IP (mig) in create_snap_ip.tcl, create DDR memory sim model, and other xdc files - Hardware: Other IP: create_ip, sim model, xdc files + Hardware: xdc files for IO, floorplan, clock and bitstream settings + Hardware: create DDR Memory controller IP (mig) in create_snap_ip.tcl, create DDR memory sim model, and other xdc files + Hardware: create_ip, sim model and xdc files for other IPs Software: New card type, register definition Testing: jenkins Readme and Documents @@ -244,8 +246,8 @@ git submodule update capi-utils is the third git repository that needs a few modifications. Same as before, fork it, make the modifications and submit a pull request. git clone https://github.com/[YOUR_USERNAME]/capi-utils There is only one file to be modified: "psl-devices". Add a new line, for example - 0x0665 U200 Xilinx 0x1002000 64 SPIx4 - The first column is the SUBSYSTEM_ID, the second column is the Card name, the third is the FPGA Chip Vendor, then it is the User Image starting address on the flash. For SPI device, size of block is 64Bytes. "SPIx4" is the flash interface type. It may also be "DPIx16" or "SPIx8". 
+ 0x1014 0x0665 U200 Xilinx 0x1002000 64 SPIx4 + It lists the Subsystem Vendor ID, Subsystem Device ID, card name, FPGA chip vendor, and then the "User_image_address" on the flash. For SPI devices, the block size is 64 bytes. "SPIx4" is the flash interface type. It may also be "DPIx16" or "SPIx8". "SPIx8" uses two bitstreams, so another starting address also needs to be provided. And when you call "capi-flash-script" to program the flash, it needs two input bitstream files (primary and secondary).
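To make the column meanings concrete, here is a small sketch of how the seven fields of such a "psl-devices" line split up (the field labels are ours, for illustration only):

```shell
# Split the example U200 entry into its seven whitespace-separated fields.
line='0x1014 0x0665 U200 Xilinx 0x1002000 64 SPIx4'
set -- $line
echo "subsys_vendor=$1 subsys_device=$2 card=$3 chip_vendor=$4 user_image_addr=$5 block_size=$6 flash_type=$7"
```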
@@ -258,15 +260,15 @@ git submodule updateGenerate capi_bsp_wrap in capi2-bsp.Make modifications to snap git repository as described above.Select an action example without DDR, for example: hls_helloworld. - Go through the "make model" and "make image" processes and build the bitstream files. + Go through the make model and make image processes and build the bitstream files. Plug the card onto Power9 server and connect a JTAG/USB cable to a laptop. Install Vivado Lab on this laptop (it requires Windows or Linux operating system). Start Vivado Lab tool, open Hardware manager.Power on the server. You will see the FPGA target is recognized by Vivado Lab tool. - Program the generated bitstream files (bin or mcs) to the card. On Vivado Lab tool, select the FPGA chip and right-click, choose "Add Configuration Memory Device..." and program the bin/mcs files to the flash. See in picture and + Program the generated bitstream files (bin or mcs) to the card. On Vivado Lab tool, select the FPGA chip and right-click, choose "Add Configuration Memory Device..." and program the bin or mcs files to the flash. See in picture and Wait it done (It may take 10 minutes). Unplug the JTAG/USB cable, reboot the server.After the server is booted, log into OS, run lspci to see if the card is there. (Usually with Device ID 0x0477). Then download snap, capi-utils, libcxl (from github). Go to snap directory, make apps and run the application. - There is another way to replace step 6 to 8. We call it "Fast program bit-file when power on". Prepare the bit file on laptop in advance. Not like bin/mcs files which are for the flash, the bit file is used to program the FPGA chip directly. When the server is powered on, after Vivado Lad sees the FPGA, right click the device, program device ... and select the bit file immediately. This action only takes about 10 seconds and can be done before hostboot on the server starts to scan PCIe devices. 
- You should be aware of the fact that because only FPGA chip is programmed, (the flash memory is empty), when the server is powered off or reboot, FPGA doesn't have electricity so the programming in FPGA chip will be lost. + There is another way to replace steps 6 to 8. We call it "Fast program bit-file when power on". Prepare the bit file on the laptop in advance. Unlike bin/mcs files, which are for the flash, the bit file is used to program the FPGA chip directly. When the server is powered on, after Vivado Lab sees the FPGA, right-click the device, choose "Program device..." and select the bit file immediately. This action only takes about 10 seconds and can be done before skiboot on the server starts to scan PCIe devices. + Be aware that now only the FPGA chip is programmed (the flash memory is still empty or holds old data), so when the server is powered off or rebooted the recent programming of the FPGA chip will be lost.
@@ -287,31 +289,31 @@ git submodule update
- When you download and install Vivado Lab, please pick up as same version as the Vivado (SDx) that you are using to build images. + When you download and install Vivado Lab, please choose the same version as the Vivado tool that you used to build the images. - Tips to help you debug: + Tips to help you debug: - Seeing 0477 by "lspci" is the most important milestone. If not, please check file "/sys/firmware/opal/msglog" to see whether there are link training failed messages. A successful message looks like this, which means this PCIe device has been scanned and recognized. The number followed "PHB#" is the PCIe device identifier in the format of "domain:bus:slot.func". You can see it by "lspci" also.: + Seeing 0477 by lspci is the most important milestone. If not, please check the file "/sys/firmware/opal/msglog" to see whether there are link training failure messages. A successful message looks like this, which means this PCIe device has been scanned and recognized. The number following "PHB#" is the PCIe device identifier in the format of "domain:bus:slot.func": [ 63.403485191,5] PHB#0000:00:00.0 [ROOT] 1014 04c1 R:00 C:060400 B:01..01 SLOT=CPU1 Slot2 (16x) [ 63.403572553,5] PHB#0000:01:00.0 [EP ] 1014 0477 R:02 C:1200ff ( device) LOC_CODE=CPU1 Slot2 (16x) - Check dmesg. Run "dmesg > dmesg.log" and search "cxl" in dmesg.log file. A normal output should be look like this + Check dmesg. Run "dmesg > dmesg.log" and search for "cxl" in the "dmesg.log" file. A normal output should look like this: [ 9.301403] cxl-pci 0000:01:00.0: Device uses a PSL9 [ 9.301523] cxl-pci 0000:01:00.0: enabling device (0140 -> 0142) [ 9.303327] cxl-pci 0000:01:00.0: PCI host bridge to bus 0006:00 [ 9.306749] cxl afu0.0: Activating AFU directed mode - Today most of the linux kernel versions already include cxl module. If you doubt this, please check by + Today most Linux kernel versions already include the cxl module. 
You can double-check it by: modinfo cxl - Check create_ip.tcl in capi2-bsp/[FPGACARD]/tcl and check the configuration of PCIe core. - If your PCIe device has been recognized as CAPI, do "ls /dev/cxl" and you can see "afu*" devices. Then your application software can open the device like operating an ordinary file. + + If your PCIe device has been recognized as CAPI, run ls /dev/cxl and you will see "afu*" devices. Then your application software can open the device as it would an ordinary file. ls /dev/cxl afu0.0m afu0.0s Some other useful commands to check PCIe config (with the right PCIe identifier "domain:bus:slot.func") sudo lspci -s 0000:01:00.0 -vvv - For example, you can check the settings coded in Xilinx PCIe core, like SUBSYSTEM_ID: + For example, you can check the settings coded in the Xilinx PCIe core, like the Subsystem Device ID: 0000:01:00.0 Processing accelerators: IBM Device 0477 (rev 02) (prog-if ff) Subsystem: IBM Device 0660 Link Speed @@ -333,10 +335,10 @@ Kernel driver in use: cxl-pci Kernel modules: cxl - If nothing shows by "ls /dec/cxl", you should check PCIe config space by + If nothing shows by ls /dev/cxl, you should check the PCIe config space: sudo hexdump /sys/bus/pci/devices/0000\:00\:00.1/config - Please change the PCIe device identifier (0000:00:00.1) accordingly. Make sure you have seen the VSEC is properly linked. If not, go back to check your VSEC address in capi_vsec.vhdl in last chapter. + Please use the PCIe device identifier (0000:00:00.1) of the device you want to check. Make sure the VSEC is properly linked. If not, go back and check "capi_vsec.vhdl". 
0000000 1014 0477 0146 0010 ff02 1200 0000 0000 0000010 000c 0000 2200 0006 000c 1000 2200 0006 0000020 000c 0000 0000 0002 0000 0000 1014 0668 @@ -362,7 +364,7 @@ Kernel modules: cxl 0000200 0000 0000 00ff 8000 0000 0000 0000 0000 0000210 0000 0000 0000 0000 0000 0000 0000 0000 * -0000e80 000b 0001 1280 0800 0801 0021 0006 0200 --> VSEC; e80 matches here +0000e80 000b 0001 1280 0800 0801 0021 0006 0200 --> VSEC starts from e80 (or 400) 0000e90 0000 b000 0000 0000 0000 0000 0000 0000 0000ea0 0100 0000 0040 0000 0200 0000 0400 0000 0000eb0 0000 0000 0000 0000 0000 0000 0000 0000 @@ -378,18 +380,25 @@ Kernel modules: cxl
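As a cross-check on the dump above: the PCIe extended capability ID of a VSEC is 0x000b, which is exactly the first 16-bit word printed at offset 0xe80 (or 0x400). A self-contained sketch with a fabricated four-byte header, using od -x (same word format as the hexdump output):

```shell
# Little-endian bytes 0b 00 01 00 print as the words "000b 0001", matching
# the VSEC header line in the real config space dump.
printf '\013\000\001\000' > config_demo.bin
od -x config_demo.bin | grep 000b && echo "VSEC capability ID present"
```

On real hardware the input is the sysfs config file instead of a fabricated one; grepping for the 000b word is only a quick sanity check, not a full parse of the capability chain.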
Stage 2: Verify Flash interface Use capi-utils to program the bitstream files. If it succeeds, it proves that the Flash interface has been configured correctly. After this step, you can get rid of the JTAG connector and use "capi-flash-script" to program the FPGA bitstreams. The mechanism behind "capi-flash-script" is: - There is a flash controller on FPGA (in capi_bsp_wrap), and it connects to PCIe config space. The flash controller exposes four VSEC registers to allow host system to control. They are "Flash Address Register", "Flash Size Register", "Flash Status/Control Register" and "Flash Data Port". See in Coherent Accelerator Interface Architecture, Chapter 12.3, "CAIA Vendor-Specific Extended Capability Structure". So capi-utils src C file reads FPGA bitstream "bin" file, and writes the bytes to VSEC "Flash Data Port" register. So the bytes are sent to PCIe, through Flash controller and finally arrive to flash memory on the card. + There is a flash controller on the FPGA (in capi_bsp_wrap), and it connects to PCIe config space. The flash controller exposes four VSEC registers to allow the host system to control it. They are: + + Flash Address Register + Flash Size Register + Flash Status/Control Register + Flash Data Port + + The details are described in Coherent Accelerator Interface Architecture, Chapter 12.3, "CAIA Vendor-Specific Extended Capability Structure". The C code in capi-utils reads the FPGA bitstream "bin" file and writes the data to the VSEC "Flash Data Port" register. The bytes are sent through PCIe to the flash controller and finally arrive at the flash memory on the card.
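Conceptually (this is not the actual capi-utils source, just an illustration), the programming loop walks the "bin" file one 32-bit word at a time, each word becoming one write to the "Flash Data Port". The sketch below only counts the words of a fabricated 8-byte stand-in file:

```shell
printf 'ABCDEFGH' > image_demo.bin   # fabricated 8-byte "bitstream"
# od -An -tx4 prints one hex word per 4 input bytes; in the real flow each
# such word would be one VSEC "Flash Data Port" write.
words=$(od -An -tx4 image_demo.bin | wc -w)
echo "would issue $words Flash Data Port writes"
```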
Stage 3: Verify DDR interface Select another action example (hdl_example with DDR) or hls_memcopy. - "make model" and "make sim". Make sure the DDR simulation model works well. - "make image" to generate the bitstream files. + make model and make sim. Make sure the DDR simulation model works well. + make image to generate the bitstream files. Use capi-utils to program the bitstream "bin" file to the card. Run the application to see whether it works. - Basically SNAP only implemented 1 DDR Bank (or channel) while most cards have 2 to 4 banks. (N250S+ is one of the rare card which has only 1 DDR bank). The main reason was that depending on user's needs, there are two options: the first is to just extend the size of the first bank by adding this 2nd bank on the same DDR memory controller. The other option is to use 2 (or more) memory controllers in parallel to have a higher throughput. This later option means that you will need to duplicate the DDR memory controller in place and this will take twice the place in the design. In this case, the action_wrapper also requires change to add the additional DDR ports. For HLS design, another HLS DDR port should be added into "actions/[YOUR_ACTION]/hw/XXX.CPP". As for an opensource project, everyone is welcomed to add your contribution by implementing it and add it to the SNAP design. + Basically SNAP only implemented one DDR bank (or channel) while most cards have two to four banks. (N250S+ is one of the rare cards which have only one DDR bank.) To implement more DDR channels, depending on the user's needs, there are two options: the first is to just extend the size of the first bank by adding this second bank on the same DDR memory controller. The other option is to use two (or more) memory controllers in parallel to have a higher throughput. This latter option means that you will need to duplicate the DDR memory controller and this will take twice the area in the design. 
In this case, the action_wrapper also requires changes to add the additional DDR ports. For an HLS design, another HLS DDR port should be added in "actions/[YOUR_ACTION]/hw/XXX.CPP". As this is an open-source project, everyone is welcome to contribute by implementing this and adding it to the SNAP design.
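The stage-3 steps can be summarized as a dry-run shell sequence. The action name hls_memcopy comes from the text above; the bitstream file name and the application name are assumptions, and the `run` helper only records the commands unless DRY_RUN is cleared.

```shell
#!/bin/sh
# Dry-run sketch of the DDR verification flow; set DRY_RUN=0 inside a real
# SNAP workspace with Vivado installed and a card available.
: "${DRY_RUN:=1}"
LOG=""
run() { LOG="$LOG$*;"; [ "$DRY_RUN" = "1" ] || "$@"; }

run make snap_config      # select your card and an action that uses DDR, e.g. hls_memcopy
run make model            # build the simulation model, including the DDR model
run make sim              # co-simulate: the DDR simulation model must work well
run make image            # generate the bitstream files

# Program the card and run the application (file and binary names assumed)
run sudo capi-flash-script hls_memcopy.bin
run snap_memcopy

printf '%s\n' "$LOG"
```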
@@ -410,9 +419,9 @@ Kernel modules: cxl
Cleanup and submit Now a new FPGA card has been enabled for CAPI2.0 SNAP. Clean up your workspace, check the files and submit your work! - capi-utils is independent. Just create a pull request and assign a reviewer. It can only been merged into master branch after having been reviewed. Submit the pull request of your "capi2-bsp fork" before "snap fork". Assign the reviewer and wait for capi2-bsp to be merged into https://github.com/open-power/capi2-bsp master branch. Update the submodule pointer to the latest "open-power/capi2-bsp" master and then submit the pull request of your forked snap. + Capi-utils is independent. Just create a pull request and assign a reviewer. It can only be merged into the master branch after it has been reviewed.
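The submission order described above can be sketched with git. The branch name my-new-card and the submodule path hardware/capi2-bsp are assumptions; check your own fork layout before pushing. The `run` helper keeps this a dry-run sketch.

```shell
#!/bin/sh
# Dry-run sketch of the two-step submission order; set DRY_RUN=0 in your clones.
: "${DRY_RUN:=1}"
LOG=""
run() { LOG="$LOG$*;"; [ "$DRY_RUN" = "1" ] || "$@"; }

# 1) Push the capi2-bsp fork first and open its pull request; wait for it
#    to be merged into open-power/capi2-bsp master.
run git -C capi2-bsp push origin my-new-card

# 2) After the merge, move the snap submodule pointer to the new master
#    (submodule path is an assumption) and open the snap pull request.
run git -C snap submodule update --remote hardware/capi2-bsp
run git -C snap add hardware/capi2-bsp
run git -C snap commit -m "Update capi2-bsp submodule pointer"
run git -C snap push origin my-new-card

printf '%s\n' "$LOG"
```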
diff --git a/enable_capi_snap/ch_introduction.xml b/enable_capi_snap/ch_introduction.xml index 3e15cff..5ebfaf2 100644 --- a/enable_capi_snap/ch_introduction.xml +++ b/enable_capi_snap/ch_introduction.xml @@ -52,24 +52,22 @@ xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="chapter_introduction"> A complete accelerator has a software part (APP, or Application) running on the CPU processor and a hardware part (AFU, Acceleration Function Unit) running on the FPGA chip. APP and AFU share host memory; that means both can read and write the full 2^64 virtual memory address range. To make this happen, CAPI technology has a CAPP (Coherent Acceleration Processor Proxy) logic unit in the processor chip, and also needs a PSL (Processor Service Layer) logic unit in the FPGA chip. For CAPI1.0 and CAPI2.0, the interconnection between processor and FPGA uses PCIe physical links and the PCIe form factor. CAPI1.0 uses PCIe Gen3x8. CAPI2.0 uses PCIe Gen4x8 or Gen3x16. - OpenCAPI is not covered in this document. Please check https://opencapi.org for more information. + OpenCAPI is not covered in this document. Visit https://opencapi.org for more information.
Enable PSL IP on FPGA - Let's focus on the FPGA side. - A customer FPGA card needs to have a PSL module (Processor Service Interface) to become a "CAPI-enabled" card. This PSL module is provided by OpenPower Foundation and is an IBM IP. + This document only applies to cards using Xilinx FPGA chips. + A customer FPGA card needs to have a PSL module (Processor Service Layer) to become a "CAPI-enabled" card. This PSL module is provided by the OpenPower Foundation. For CAPI1.0, the PSL module and the surrounding board-specific modules are provided in the form of a routed dcp file (Xilinx Vivado design checkpoint). It's usually called b_route_design.dcp. For CAPI2.0, the PSL is an IP package with encrypted source code. It's named like ibm.com_CAPI_PSL9_WRAP_2.00.zip. They can be downloaded at https://www.ibm.com/systems/power/openpower. From the menu, select "CAPI", "Coherent Accelerator Processor Interface (CAPI)" or directly click the "CAPI" icon to go to the CAPI section. Then download the appropriate files depending on your target system being POWER8 (CAPI 1.0) or POWER9 (CAPI 2.0). You need to register an IBM ID to download them. - For a new FPGA card, if you want to enable CAPI on it, it simply means to create a board supporting package which includes the PSL module onto the FPGA and let it work. There are two levels: HDK and SNAP. -
HDK - - For HDK, a project from FPGA Vendors (i.e, a Xilinx Vivado project) which is composed of BSP (Board Supporting Package, containing PSL module) and sample user logic (AFU), is delivered to acceleration developers. This project is called HDK (Hardware Development Kit). + Users can develop CAPI accelerators in two modes: HDK and SNAP. + HDK is the abbreviation of Hardware Development Kit. As shown in the diagram below, on the FPGA side, you need a Xilinx Vivado project which includes two parts: the BSP (Board Supporting Package, containing the PSL module) and the AFU (Acceleration Function Unit). How to generate the BSP will be introduced in Chapter
- Develop an acceleration on HDK + Develop an accelerator in HDK mode @@ -77,14 +75,23 @@ xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="chapter_introduction">
- The developers working on HDK level need to know the details about PSL interface specifications and write Verilog/VHDL logic to interact to it. Please refer to CAPI1.0 PSL Spec and CAPI2.0 PSL Spec or search "PSL/AFU interface" in your web browser. - As a full development environment, you also need SDK (Software Development Kit) which contains the example application software code and PSLSE (PSL Simulation Engine) for a software-hardware together simulation to guarantee the correctness of accelerator design. - HDK provides the maximum available FPGA resource area and the shortest latency. However, we recommend developers to work on SNAP because SNAP simplifies the developing work significantly. + The AFU is where user-defined functions are implemented. A developer working on the AFU needs to understand the protocol between AFU and BSP, which is defined in the PSL/AFU interface specification. Please refer to the CAPI1.0 PSL Spec and CAPI2.0 PSL Spec or search "PSL/AFU interface" in your web browser. + When you develop an accelerator, you also need the PSLSE (PSL Simulation Engine) for software-hardware co-simulation to guarantee the correctness of the accelerator design. + When you deploy the accelerator to OpenPOWER servers, the user library libcxl and the kernel module cxl are required to run the application. + In all, HDK mode provides maximum control, best utilization of resources and the shortest latency. However, SNAP mode simplifies and standardizes application development significantly and is the recommended approach.
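A quick sanity check of the runtime prerequisites mentioned above (the cxl kernel module and the libcxl user library) might look like the sketch below. The /dev/cxl directory is where the cxl driver normally exposes AFU devices, but treat the exact paths as assumptions for your distribution; the `run` helper keeps this a dry-run sketch.

```shell
#!/bin/sh
# Dry-run sketch; set DRY_RUN=0 on an OpenPOWER server with a CAPI card.
: "${DRY_RUN:=1}"
LOG=""
run() { LOG="$LOG$*;"; [ "$DRY_RUN" = "1" ] || "$@"; }

run sh -c 'lsmod | grep cxl'            # the cxl kernel module should be loaded
run ls /dev/cxl                         # AFU devices such as afu0.0m appear here
run sh -c 'ldconfig -p | grep libcxl'   # libcxl should be installed for the APP

printf '%s\n' "$LOG"
```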
SNAP - SNAP is the abbreviation of Storage, Networking and Analytics Programming. It is an open-source acceleration development framework https://github.com/open-power/snap. On the FPGA side, SNAP framework adds a PSL/AXI bridge, a DDR SDRAM controller and an optional NVMe controller. Thus, the developer can focus on their acceleration kernel logic (here we call it hardware action) and interface the framework via several AXI ports. -
+ SNAP is the abbreviation of Storage, Networking and Analytics Programming. It is an open-source acceleration development framework https://github.com/open-power/snap. The benefits are: + + On the FPGA side, the SNAP framework adds a bridge that provides an AXI interface to developers, so the developer can focus on the acceleration function logic design and does not need to study the details of the PSL interface specification. AXI is the de facto industry standard for on-chip bus interconnections and is part of AMBA (Advanced Microcontroller Bus Architecture). + It also provides a DDR SDRAM controller and an optional NVMe controller. The developer can use the card memory or storage directly. + SNAP supports using HLS (High Level Synthesis) to develop the acceleration functional unit (the "Hardware Action" in the yellow box). Developers can write C++ functions and Vivado HLS will compile/convert them to a Verilog or VHDL design automatically. + A new user library layer, "libsnap", provides more convenient APIs. + SNAP is an integrated development environment in which the developer can configure the project, create the Vivado project, run co-simulation or build the bitstream with simple commands. + Many action examples help new developers get started. +
Develop an accelerator on SNAP @@ -92,18 +99,12 @@ xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="chapter_introduction">
- - This document focus on CAPI2.0. For CAPI1.0 enablement, the BSP part is a little different, please contact an IBM representative for more information. The SNAP part is the same. In following chapters, we introduce how to: - - Enable BSP - Enable SNAP - + Equipping the new FPGA card with the SNAP framework requires a few additional steps, which are introduced in Chapter + This document focuses on CAPI2.0. For CAPI1.0 enablement, please contact capi-snap-doc@mailinglist.openpowerfoundation.org for more information. + It is assumed the reader already knows how to work with Vivado projects and SNAP. You can find many materials on how to develop an accelerator with SNAP (training videos, the "docs" folder on the snap GitHub, or other webpages), so they are not discussed in this document. - We assume the reader knows how to work on Vivado Project and SNAP already. You can find many materials on how to develop an accelerator with SNAP (Training videos, "docs" folder on snap github, or other webpages) so they are not discussed in this document.
-
diff --git a/enable_capi_snap/figures/psl_fpga.png b/enable_capi_snap/figures/psl_fpga.png index a5ff69a..45d99aa 100644 Binary files a/enable_capi_snap/figures/psl_fpga.png and b/enable_capi_snap/figures/psl_fpga.png differ diff --git a/enable_capi_snap/figures/snap.png b/enable_capi_snap/figures/snap.png index 56cc574..b38064b 100644 Binary files a/enable_capi_snap/figures/snap.png and b/enable_capi_snap/figures/snap.png differ