Brain-Inspired Hyperdimensional Computing

The mathematical properties of high-dimensional spaces show a remarkable agreement with behaviors observed in the brain. Brain-inspired hyperdimensional (HD) computing explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers: it captures the idea of pattern recognition by modeling each neural activity with a hypervector, a vector with dimensionality in the thousands. Hypervectors are high-dimensional (e.g., D = 10,000), holographic, and (pseudo)random with independent and identically distributed (i.i.d.) components. HD computing has several notable properties, illustrated by the short sketch after this list:
- General and scalable model of computing with a well-defined set of arithmetic operations
- Fast and one-shot learning
- A memory-centric architecture with highly parallelizable operations
- Extreme robustness against most failure mechanisms and noise
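To make these properties concrete, here is a minimal, illustrative sketch in NumPy (our own example, not code from the publications below): two i.i.d. random hypervectors are quasi-orthogonal, with a normalized Hamming distance concentrating around 0.5, while a hypervector with 10% of its bits flipped stays far closer to its original, which is the robustness claimed above.

```python
import numpy as np

D = 10_000  # hypervector dimensionality
rng = np.random.default_rng(0)

# Two i.i.d. random binary hypervectors.
a = rng.integers(0, 2, size=D, dtype=np.uint8)
b = rng.integers(0, 2, size=D, dtype=np.uint8)

# Unrelated random hypervectors are quasi-orthogonal: their
# normalized Hamming distance concentrates tightly around 0.5.
print(np.count_nonzero(a != b) / D)  # ~0.5

# Robustness: flip 10% of the bits of `a`; the corrupted copy is
# still at distance ~0.1, far from the ~0.5 of an unrelated vector.
noisy_a = a.copy()
flip = rng.choice(D, size=D // 10, replace=False)
noisy_a[flip] ^= 1
print(np.count_nonzero(a != noisy_a) / D)  # ~0.1
```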
At its very core, HD computing is about manipulating and comparing large patterns stored in memory as hypervectors: input symbols are mapped to hypervectors, and an associative search is performed for reasoning and classification. For every classification event, an associative memory is in charge of finding the closest match between a set of learned hypervectors and a query hypervector using a distance metric. Because the components are i.i.d., a memory-centric architecture can tolerate a massive number of errors, which eases combining various methodological design approaches to boost energy efficiency and scalability. From a hardware perspective, we exploit this architectural insight in three widely used methodological design approaches for developing scalable and efficient associative memories: we design architectures for hyperdimensional associative memory (HAM) that enable energy-efficient, fast, and scalable search. These HAM designs perform a nearest-neighbor search under the Hamming distance, scale linearly with the number of dimensions in the hypervectors, and explore a large design space with orders-of-magnitude higher efficiency.
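A minimal sketch of this associative search follows (hypothetical NumPy code under our own assumptions, not the HAM hardware designs from the papers): class hypervectors are learned by bitwise-majority bundling of training hypervectors, and a query is assigned to the class at the smallest Hamming distance.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)

def bundle(hvs):
    """Bitwise majority vote over binary hypervectors (HD bundling)."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def am_search(am, query):
    """Associative search: return the label whose learned hypervector
    has the smallest Hamming distance to the query."""
    dists = {label: np.count_nonzero(hv != query) for label, hv in am.items()}
    return min(dists, key=dists.get)

def corrupt(hv, p):
    """Flip each bit of `hv` independently with probability `p`."""
    return hv ^ (rng.random(D) < p).astype(np.uint8)

# Toy associative memory: each class hypervector bundles five noisy
# training hypervectors drawn around a random class prototype.
protos = {c: rng.integers(0, 2, size=D, dtype=np.uint8) for c in "ABC"}
am = {c: bundle([corrupt(hv, 0.2) for _ in range(5)]) for c, hv in protos.items()}

# Even a heavily corrupted query still matches the right class.
query = corrupt(protos["B"], 0.3)
print(am_search(am, query))  # -> "B"
```

The HAM architectures accelerate exactly this nearest-Hamming-distance step, whose cost grows linearly with D, matching the scaling noted above.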
Selected publications:
[ICCAD'19] M. Imani, S. Bosch, M. Javaheripi, B. Rouhani, X. Wu, F. Koushanfar, T. Rosing, “SemiHD: Semi-Supervised Learning Using Hyperdimensional Computing”, IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019.
[DAC'19] M. Imani, J. Morris, J. Messerly, H. Shu, Y. Deng, T. Rosing, “BRIC: Locality-based Encoding for Energy-Efficient Brain-Inspired Hyperdimensional Computing”, IEEE/ACM Design Automation Conference (DAC), 2019 (Best paper candidate) [PDF].
[FPGA'19] S. Salamat, M. Imani, B. Khaleghi, T. Rosing, “F5-HD: Fast Flexible FPGA-based Framework for Refreshing Hyperdimensional Computing”, ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2019 (acceptance rate 19.8%) [PDF].
[DATE'19] M. Imani, J. Messerly, F. Wu, W. Pi, T. Rosing, “A Binary Learning Framework for Hyperdimensional Computing”, IEEE/ACM Design, Automation and Test in Europe Conference (DATE), 2019 [PDF].
[DAC'18] M. Imani, C. Huang, D. Kong, T. Rosing, “Hierarchical Hyperdimensional Computing for Energy Efficient Classification”, IEEE/ACM Design Automation Conference (DAC), 2018 [PDF].
[HPCA'17] M. Imani, A. Rahimi, D. Kong, T. Rosing, J. M. Rabaey, “Exploring Hyperdimensional Associative Memory”, IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2017 [PDF].
[D&T'17] M. Imani, A. Rahimi, J. Hwang, T. Rosing, J. M. Rabaey, “Low-Power Sparse Hyperdimensional Encoder for Language Recognition”, IEEE Design & Test, 2017.