
BSG Bladerunner

This repository tracks releases of the HammerBlade source code and infrastructure. It can be used to run simulated applications on HammerBlade and to build FPGA images.

HammerBlade Overview

HammerBlade is an open-source manycore architecture for performing efficient computation on large general-purpose workloads. A HammerBlade is composed of nodes attached to a general-purpose host, similar to a general-purpose GPU. Each node is an array of tiles interconnected by a 2-D mesh network and attached to a flexible memory system.

HammerBlade is a Single-Program, Multiple-Data (SPMD) architecture: all tiles execute the same program on different sets of input data to complete a larger computation kernel. Programs are written in the CUDA-Lite language (C/C++) and executed on the tiles in parallel "groups" and sequential "grids". The CUDA-Lite host runtime (C/C++) manages parallel and sequential execution.
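
As an illustration of the SPMD model, below is a minimal sketch of a CUDA-Lite device kernel in which every tile in a group adds a strided slice of two vectors. The header and symbol names (bsg_manycore.h, __bsg_id, bsg_tiles_X, bsg_tiles_Y) follow the conventions of the BSG Manycore device library, but they are assumptions here and may differ between releases; treat this as a sketch, not a drop-in example.

    /* Minimal CUDA-Lite (C) SPMD kernel sketch. The headers and the
     * __bsg_id / bsg_tiles_X / bsg_tiles_Y symbols are assumed from the
     * BSG Manycore device library and may differ between releases. */
    #include "bsg_manycore.h"
    #include "bsg_set_tile_x_y.h"

    /* Every tile in the group runs this same kernel; each tile works on a
     * strided slice of the input selected by its id within the group. */
    int __attribute__ ((noinline)) kernel_vec_add(int *A, int *B, int *C, int N)
    {
        int tiles = bsg_tiles_X * bsg_tiles_Y;    /* tiles in the group */
        for (int i = __bsg_id; i < N; i += tiles) /* strided partition  */
            C[i] = A[i] + B[i];
        return 0;
    }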

HammerBlade is being integrated with higher-level parallel frameworks and Domain-Specific Languages: a PyTorch backend is being developed to accelerate machine learning, and a GraphIt code generator is being developed to support graph computations.

C/C++, Python, and PyTorch programs can interact with a Cooperatively Simulated (Cosimulated) HammerBlade node using Synopsys VCS or Verilator. The HammerBlade runtime and cosimulation top levels are in the BSG Replicant repository.
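
On the host side, the same C/C++ program drives a cosimulated node (or an FPGA) through the CUDA-Lite host runtime. The sketch below shows the rough shape of that flow; the hb_mc_device_* calls, the hb_mc_dimension_t fields, and the header name are assumptions based on the BSG Replicant runtime and may differ by release, and error checking is omitted.

    /* Rough sketch of the CUDA-Lite host flow (C). Function names, struct
     * fields, and the header are assumptions and may differ by release;
     * error checking is omitted for brevity. */
    #include <stdint.h>
    #include <bsg_manycore_cuda.h>

    int run_vec_add(const char *riscv_binary)
    {
        hb_mc_device_t device;
        hb_mc_dimension_t grid = { .x = 1, .y = 1 };  /* one tile group      */
        hb_mc_dimension_t tg   = { .x = 2, .y = 2 };  /* 2x2 tiles per group */
        uint32_t argv[]        = { 0 };               /* kernel arguments    */

        hb_mc_device_init(&device, "vec_add", 0);              /* attach to node 0   */
        hb_mc_device_program_init(&device, riscv_binary,
                                  "default_allocator", 0);     /* load RISC-V binary */
        hb_mc_kernel_enqueue(&device, grid, tg,
                             "kernel_vec_add", 1, argv);       /* queue the kernel   */
        hb_mc_device_tile_groups_execute(&device);             /* run all groups     */
        hb_mc_device_finish(&device);                          /* tear down          */
        return 0;
    }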

For a more in-depth overview of the HammerBlade architecture, see the HammerBlade Overview.

The architectural HDL for HammerBlade is in the BSG Manycore and BaseJump STL repositories. For technical details about the HammerBlade architecture, see the HammerBlade Technical Reference Manual.

To run simulated applications on HammerBlade, or build FPGA images from this repository, follow the instructions below:

Requirements

CAD Tools

  • To simulate with VCS you must have VCS-MX installed. (After 2019, VCS-MX is included with VCS)

  • Verilator is built as part of the setup process.

  • To simulate or compile our AWS design, you must have Vivado 2019.1 installed and correctly configured in your environment. The Vivado tools must have the Virtex UltraScale+ family device files installed (see page 40 of this guide). If you are using Vivado 2019.1, you will need to apply the following AR before running simulation: https://www.xilinx.com/support/answers/72404.html

The Makefiles will warn or fail if they cannot find the appropriate tools.

Packages

Building the RISC-V Toolchain requires several distribution packages. The following are required for CentOS/RHEL-based distributions:

libmpc autoconf automake libtool curl gmp gawk bison flex texinfo gperf expat-devel dtc cmake3 python3-devel

On Debian-based distributions, the following packages are required:

libmpc-dev autoconf automake libtool curl libgmp-dev gawk bison flex texinfo gperf libexpat-dev device-tree-compiler cmake build-essential python3-dev

Setup: VCS (No AWS Simulation)

Non-Bespoke Silicon Group (BSG) users MUST have Vivado and VCS installed before starting these steps.

The default VCS environment simulates the manycore architecture, without any closed-source or encrypted IP.

  1. Add SSH Keys to your GitHub account.

  2. Initialize the submodules: git submodule update --init --recursive

  3. (BSG Users Only: git clone git@bitbucket.org:taylor-bsg/bsg_cadenv.git)

  4. Run make -f amibuild.mk riscv-tools

Setup: Verilator (Beta)

Verilator simulates the HammerBlade architecture using C/C++ DPI functions instead of AWS F1 and Vivado IP.

  1. Add SSH Keys to your GitHub account.

  2. Initialize the submodules: git submodule update --init --recursive

  3. Run make verilator-exe

  4. Run make -f amibuild.mk riscv-tools

Setup: VCS (AWS)

Non-Bespoke Silicon Group (BSG) users MUST have Vivado and VCS installed before starting these steps.

VCS simulates the FPGA design that is compiled for AWS F1 and uses Vivado IP.

  1. Add SSH Keys to your GitHub account.

  2. Initialize the submodules: git submodule update --init --recursive

  3. (BSG Users Only: git clone git@bitbucket.org:taylor-bsg/bsg_cadenv.git)

  4. Run make aws-fpga.setup.log

  5. Run make -f amibuild.mk riscv-tools

Examples

See bsg_replicant/README.md

Makefile targets

  • setup: Builds all tools and updates necessary for cosimulation.

  • build-ami: Builds the Amazon Machine Image (AMI) and emits the AMI ID.

  • build-tarball: Compiles the manycore design (locally) into a tarball.

  • build-afi: Uploads a Design Checkpoint (DCP) to AWS and processes it into an Amazon FPGA Image (AFI) with an Amazon Global FPGA Image ID (AGFI).

  • print-ami: Prints the current AMI whose version matches FPGA_IMAGE_VERSION in project.mk.

You can also run make help to see all of the available targets in this repository.

Repository File List

  • Makefile provides targets for cloning repositories and building new Amazon Machine Images (AMIs). See the section on Makefile targets for more information.

  • amibuild.mk provides targets for building and installing the manycore tools on an Amazon EC2 instance. It is used indirectly by the build-ami target in Makefile.

  • project.mk defines paths to each of the submodule dependencies.

  • scripts: Scripts used to upload Amazon FPGA images (AFIs) and configure Amazon Machine Images (AMIs).
