Updated DAC 2018

Updated ML papers from DAC 2018.
Congrats on your ISCA!

kentaroy47 authored Jun 27, 2018
1 parent 59cf6bb commit 3438cc1
25 changes: 17 additions & 8 deletions README.md
@@ -9,7 +9,7 @@ My name is Fengbin Tu. I'm currently pursuing my Ph.D. degree with the Institute
- 2015: [ISCA](#2015-isca), [ASPLOS](#2015-asplos), [FPGA](#2015-fpga), [DAC](#2015-dac)
- 2016: [ISSCC](#2016-isscc), [ISCA](#2016-isca), [MICRO](#2016-micro), [HPCA](#2016-hpca), [DAC](#2016-dac), [FPGA](#2016-fpga), [ICCAD](#2016-iccad), [DATE](#2016-date), [ASPDAC](#2016-aspdac), [VLSI](#2016-vlsi), [FPL](#2016-fpl)
- 2017: [ISSCC](#2017-isscc), [ISCA](#2017-isca), [MICRO](#2017-micro), [HPCA](#2017-hpca), [ASPLOS](#2017-asplos), [DAC](#2017-dac), [FPGA](#2017-fpga), [ICCAD](#2017-iccad), [DATE](#2017-date), [VLSI](#2017-vlsi), [FCCM](#2017-fccm), [HotChips](#2017-hotchips)
-- 2018: [ISSCC](#2018-isscc), [ISCA](#2018-isca), [HPCA](#2018-hpca), [ASPLOS](#2018-asplos), [FPGA](#2018-fpga), [DATE](#2018-date), [ASPDAC](#2018-aspdac), [VLSI](#2018-vlsi)
+- 2018: [ISSCC](#2018-isscc), [ISCA](#2018-isca), [HPCA](#2018-hpca), [ASPLOS](#2018-asplos), [FPGA](#2018-fpga), [DATE](#2018-date), [ASPDAC](#2018-aspdac), [VLSI](#2018-vlsi), [DAC](#2018-dac)

- [Important Topics](#important-topics)
- [Tutorial and Survey](#tutorial-and-survey)
@@ -367,12 +367,12 @@ This is a collection of conference papers that interest me. The emphasis is focu
- **Gist: Efficient Data Encoding for Deep Neural Network Training.** (Michigan, Microsoft, Toronto)
- **The Dark Side of DNN Pruning.** (UPC)
- **Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks.** (Michigan)
-- EVA^2: Exploiting Temporal Redundancy in Live Computer Vision. (Cornell)
-- Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision. (Rochester, Georgia Tech, ARM)
-- Feature-Driven and Spatially Folded Digital Neurons for Efficient Spiking Neural Network Simulations. (POSTECH/Berkeley, Seoul National)
-- Space-Time Algebra: A Model for Neocortical Computation. (Wisconsin)
-- Scaling Datacenter Accelerators With Compute-Reuse Architectures. (Princeton)
-- Enabling Scientific Computing on Memristive Accelerators. (Rochester)
+- **EVA^2: Exploiting Temporal Redundancy in Live Computer Vision.** (Cornell)
+- **Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision.** (Rochester, Georgia Tech, ARM)
+- **Feature-Driven and Spatially Folded Digital Neurons for Efficient Spiking Neural Network Simulations.** (POSTECH/Berkeley, Seoul National)
+- **Space-Time Algebra: A Model for Neocortical Computation.** (Wisconsin)
+- **Scaling Datacenter Accelerators With Compute-Reuse Architectures.** (Princeton)
+- **Enabling Scientific Computing on Memristive Accelerators.** (Rochester)

### 2018 DATE
- **MATIC: Learning Around Errors for Efficient Low-Voltage Neural Network Accelerators.** (University of Washington)
@@ -383,7 +383,16 @@ This is a collection of conference papers that interest me. The emphasis is focu
- *Decomposes convolution into multiple dense and zero kernels for sparsity savings; a minimal sketch of the idea follows this list.*
- **Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA.** (CAS)
- **moDNN: Memory Optimal DNN Training on GPUs.** (University of Notre Dame, CAS)
-- HyperPower: Power and Memory-Constrained Hyper-Parameter Optimization for Neural Networks. (CMU, Google)
+- **HyperPower: Power and Memory-Constrained Hyper-Parameter Optimization for Neural Networks.** (CMU, Google)
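
The kernel-decomposition note above is the one technique summarized in this hunk, so a minimal sketch may help. This is an assumption-laden illustration of the general idea, not the paper's implementation: split a convolution kernel into fixed-size sub-kernels, drop the all-zero ones, and sum the partial convolutions of the remaining dense ones. The function names and the 2x2 block granularity are illustrative choices, not anything taken from the paper.

```python
# Minimal sketch (illustrative assumptions throughout): decompose one 2D
# convolution kernel into dense and zero sub-kernels, skipping the zero ones.
import numpy as np

def split_kernel(kernel, block=2):
    """Split a 2D kernel into (offset, sub_kernel) blocks, dropping all-zero blocks."""
    sub_kernels = []
    h, w = kernel.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = kernel[i:i + block, j:j + block]
            if np.any(sub):                      # zero blocks contribute nothing
                sub_kernels.append(((i, j), sub))
    return sub_kernels

def conv2d_valid(x, k):
    """Plain 'valid' 2D cross-correlation, used on each dense sub-kernel."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def sparse_conv(x, kernel, block=2):
    """Sum the partial convolutions of the dense sub-kernels, shifted by their offsets."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for (di, dj), sub in split_kernel(kernel, block):
        window = x[di:di + oh + sub.shape[0] - 1, dj:dj + ow + sub.shape[1] - 1]
        out += conv2d_valid(window, sub)
    return out

x = np.random.rand(8, 8)
k = np.zeros((4, 4))
k[:2, :2] = np.random.rand(2, 2)                 # structured sparsity: one dense block
assert np.allclose(sparse_conv(x, k), conv2d_valid(x, k))
```

Skipping the zero blocks is where the savings would come from: here three of the four 2x2 sub-kernels are never computed, and a hardware scheduler would likewise issue work only for the dense sub-kernels.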

+### 2018 DAC
+- **DPS: Dynamic Precision Scaling for Stochastic Computing-Based Deep Neural Networks.** (UNIST)
+- **DyHard-DNN: Even More DNN Acceleration With Dynamic Hardware Reconfiguration.** (Univ. of Virginia)
+- **Exploring the Programmability for Deep Learning Processors: From Architecture to Tensorization.** (Univ. of Washington)
+- **LCP: Layer Clusters Paralleling Mapping Mechanism for Accelerating Inception and Residual Networks on FPGA.** (THU)
+- **Ares: A Framework for Quantifying the Resilience of Deep Neural Networks.** (Harvard)
+- **Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks.** (Univ. of Toronto)
+- **Parallelizing SRAM Arrays with Customized Bit-Cell for Binary Neural Networks.** (Arizona)

## Important Topics
This is a collection of papers on other important topics related to neural networks. Papers of significance are marked in **bold**. My comments are marked in *italic*.