FireFly: A High-Throughput and Reconfigurable Hardware Accelerator for Spiking Neural Networks

Jindong Li, Guobin Shen, Dongcheng Zhao, Qian Zhang, Yi Zeng

Abstract—Spiking neural networks (SNNs) have been widely used due to their strong biological interpretability and high energy efficiency. With the introduction of the backpropagation algorithm and surrogate gradients, the structure of spiking neural networks has become more complex, and the performance gap with artificial neural networks has gradually decreased. However, most SNN hardware implementations for field-programmable gate arrays (FPGAs) cannot meet arithmetic or memory efficiency requirements, which significantly restricts the development of SNNs. They either do not delve into the arithmetic operations between the binary spikes and synaptic weights, or they assume unlimited on-chip RAM resources by using overly expensive devices on small tasks.
To improve arithmetic efficiency, we analyze the neural dynamics of spiking neurons, generalize the SNN arithmetic operation to the multiplex-accumulate operation, and propose a high-performance implementation of this operation by utilizing the DSP48E2 hard block in Xilinx Ultrascale FPGAs. To improve memory efficiency, we design a memory system that enables efficient synaptic weight and membrane voltage memory access with reasonable on-chip RAM consumption. Combining the above two improvements, we propose an FPGA accelerator that can process spikes generated by the firing neurons on-the-fly (FireFly). FireFly is implemented on several FPGA edge devices with limited resources but still guarantees a peak performance of 5.53 TSOP/s at 300 MHz. As a lightweight accelerator, FireFly achieves the highest computational density efficiency compared with existing research using large FPGA devices.

Index Terms—Spiking Neural Networks, Field-programmable gate array, Hardware Accelerator
I. INTRODUCTION

Spiking neural networks (SNNs) are considered the third generation of artificial neural networks (ANNs) [1]. They were developed to mimic the operational mechanism of the human brain, where information is communicated via spikes among neurons. Surrogate gradient algorithms have been introduced to tackle the nondifferentiability of spikes and enhance the learning capability of SNNs [2], [3]. Recent advances in SNNs have demonstrated comparable performance to non-spiking ANNs [4]–[8].

Manuscript created January 1, 2023. This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB32070100). (Corresponding authors: Qian Zhang; Yi Zeng.) Jindong Li and Qian Zhang are with the Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China (e-mail: lijindong2022@ia.ac.cn, q.zhang@ia.ac.cn). Guobin Shen is with the Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China (e-mail: shenguobin2021@ia.ac.cn). Dongcheng Zhao is with the Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: zhaodongcheng2016@ia.ac.cn). Yi Zeng is with the Research Center for Brain-Inspired Intelligence, the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; the University of Chinese Academy of Sciences, Beijing 100049, China; and the Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China (e-mail: yi.zeng@ia.ac.cn).
However, compared to the extensive work on ANN accelerators [9]–[11], existing SNN hardware accelerators still lag behind, limiting the practical applications of SNNs.

Most research ignores the importance of efficiently implementing arithmetic operations in SNN accelerators. In field-programmable gate array (FPGA) design, using the built-in dedicated hard blocks to implement arithmetic operations can achieve considerably higher performance than general logic fabric counterparts. In an arithmetic-intensive application, a fabric-only implementation can lead to a compromised clock frequency and even routing failures when fabric consumption is high. However, in SNN accelerator design, the register transfer level (RTL) description of the SNN arithmetic operation cannot be automatically synthesized into the dedicated arithmetic hard block. Therefore, most SNN accelerators adopt fabric-only implementations without further optimization.
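The arithmetic operation at issue is spike-gated accumulation: because a spike is a single bit, no multiplier is needed, and the spike merely multiplexes between adding the synaptic weight and adding zero. A behavioral sketch of this multiplex-accumulate (illustrative Python, not the paper's DSP48E2 mapping; the function name is ours):

```python
def multiplex_accumulate(spikes, weights, acc=0):
    """Spike-gated accumulation: each 1-bit spike selects whether the
    corresponding synaptic weight is added to the running sum."""
    for s, w in zip(spikes, weights):
        acc += w if s else 0  # mux (spike ? weight : 0), then accumulate
    return acc
```

In contrast, an ANN multiply-accumulate must first multiply a multi-bit activation by the weight; the mux-then-add structure is what makes the SNN operation cheap enough to map onto DSP hard-block features such as wide-bus multiplexing.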
Although a single arithmetic operation unit in an SNN accelerator consumes considerably fewer resources than a multiply-accumulate (MAC) unit in an ANN accelerator, hardware optimization of this operation can still significantly impact system performance when the unit is instantiated hundreds or even thousands of times. In Xilinx Ultrascale FPGAs, the dedicated arithmetic hard block, the DSP48E2, enhances the speed and efficiency of many operations, including multiplication, addition, wide-bus multiplexing, pattern detection, and single instruction, multiple data (SIMD) operations. It is possible to generalize SNN computation to the arithmetic operations that the DSP48E2 can provide.

Another important aspect of SNN accelerator design is the memory system. When scaling up parallelism, the memory bandwidth imbalance between the binary input-output spikes, the multi-bit synaptic weights, and the multi-bit membrane voltage becomes problematic. While the computational complexity and the memory footprint of the binary spikes decrease, the memory access requirements of the synaptic weights and membrane voltage do not.
The off-chip memory access bandwidth needed by the weights and membrane voltage cannot fully support the increased parallelism brought by the hardware-friendly synaptic operations and storage-friendly binary spikes without further exploration of the reuse mechanism. Most hardware accelerators assume large on-chip memory, store all the synaptic weights, and accumulate the membrane voltage on-chip to ease the harsh bandwidth requirement. This method is not scalable, especially when the model gets larger and targets edge FPGA devices. A scalable memory system for synaptic weights and membrane voltage, balancing off-chip data access against on-chip data buffering, should be developed.

arXiv:2301.01905v1 [cs.NE] 5 Jan 2023

At present, most existing neuromorphic hardware or accelerators focus on brain simulation tasks.
While these hardware designs claim to support event-driven processing, they are inefficient in terms of resource utilization, computational density, and scalability. In real-world SNN applications, it is not feasible to use overly expensive and large FPGA devices. A lightweight, high-performance SNN accelerator targeting resource-constrained edge scenarios should be developed.

Focusing on these aspects, we propose FireFly, a high-throughput and reconfigurable FPGA accelerator that achieves both arithmetic and memory efficiency. Our contributions can be summarized as follows.

1) We generalize the SNN arithmetic operation to the multiplex-accumulate operation and propose a high-performance implementation of this operation by utilizing the DSP48E2 hard block in Xilinx Ultrascale FPGAs.

2) We design a synaptic weight delivery hierarchy and a partial sum and membrane voltage (Psum-Vmem) unified buffer to balance the off-chip memory access bandwidth and on-chip RAM consumption.
3) We evaluate multiple deep SNN models on various datasets and achieve faster inference speed and higher classification accuracy than existing research. We implement FireFly on several commercial off-the-shelf FPGA edge devices with limited resources, bringing hope for real-world SNN applications in edge scenarios.

II. RELATED WORK

The existing dedicated neuromorphic hardware designed for SNNs can be categorized into three types.

The majority of neuromorphic hardware constructs its hardware substrates in a Network-on-Chip fashion. Loihi [12], Tianji Chip [13], SpiNNaker [14], and TrueNorth [15] fall into this category. In these hardware designs, neurons are grouped into multiple neurocores, which communicate via spikes through the Network-on-Chip (NoC), and spike messages are scheduled by dedicated routers.
These hardware architectures are compatible with the event-driven nature of SNNs, as spike events are generated, transferred, and processed only if a neuron fires. However, these neuromorphic hardware designs place rigid restrictions on the network. The SNN models are distributed among the neurocores, and the total number of neurons in the model cannot exceed the maximum capacity of the hardware, not to mention the harsh fan-in and fan-out hardware limitations of the network.

The second type of neuromorphic hardware explores emerging devices. The BrainScaleS system [16], developed by Heidelberg University, emulates spiking neural networks on analog neuromorphic hardware and achieves several advantages over conventional computers. Some research explores new materials such as memristors and optics [17]–[19]. However, the low precision and uncertain nature of this hardware prevent it from being used in practice.
The third type of neuromorphic hardware follows the scheme of ANN accelerator design, except for constructing dedicated hardware for synaptic operations, and explores optimal dataflows specifically for SNNs [20]–[26]. These works require less area and achieve higher computing resource utilization. Fine-grained parallelism enables high-performance computing of SNNs compared with the sequential spike processing mechanism of the NoC counterparts. This type of hardware has the fewest restrictions on network models and can quickly adapt to emerging neuromorphic research. FPGA platforms are the ideal choice for this type of hardware due to their flexibility and reconfigurability. FireFly belongs to this category, and its contributions are largely complementary to existing work.

SyncNN [21] proposed a novel synchronous event-driven SNN reconfigurable inference engine and evaluated multiple SNN models on multiple FPGA devices.
Fang et al. [27] proposed a holistic optimization framework for the encoder, model, and architecture design of FPGA-based neuromorphic hardware. However, these designs are based on high-level synthesis and thus induce large resource redundancy. Lee et al. [28], [29] and Chen et al. [30] explored spatial-temporal parallelism by unrolling the computations in both the spatial and temporal dimensions and achieved significant acceleration. However, parallelization across multiple time points violates the time-related sequential nature of the membrane voltage update. SpinalFlow [25] achieved significant sparsity acceleration by adopting a different input/output spike representation to skip non-spike computations.
SATO [31] achieved high-speed inference by incorporating a temporal-oriented dataflow and a bucket-sort-based dispatcher to balance the workload. However, these techniques only work for temporally coded SNNs, limiting the accuracy of the SNN models. DeepFire [23] was the first work migrating DSP48E2s into neuron core design. However, it did not delve into the functions of the DSP48E2 and still induces large fabric overhead.

We argue that with careful register transfer level (RTL) design, focusing on optimizing spatial parallelism on FPGAs, adopting regular and simple time-step CNN-like processing, and fully utilizing the multi-function DSP48E2, we can still achieve impressive inference throughput on small FPGA edge devices. FireFly is more applicable in real-world applications where design space exploration is constrained by limited resources.

III. SNN BASICS
A. Spiking Neuron Model

Spiking neurons are the basic units of SNNs; they are connected through weighted synapses and transmit information through binary spikes. Although more complex and detailed neuron models such as Izhikevich [32] and Hodgkin–Huxley [33] can accurately model a biological neuron's behavior, simpler models such as Integrate-and-Fire (IF) [34] and Leaky Integrate-and-Fire (LIF) [35] are used more often in current SNN applications. An IF neuron integrates its inputs over multiple timesteps and generates a spike whenever the integrated membrane voltage surpasses a firing threshold. A LIF neuron behaves the same except for the leaky behavior of the membrane voltage. The neural dynamics of a LIF neuron's membrane potential u can be described as:

τ_m du/dt = −u + R · I(t),  u < V_th  (1)

where V_th denotes the threshold, I denotes the input current, R denotes the resistance, and τ_m is the membrane time constant. A spike is generated when u reaches V_th, and u is reset to the resting potential u_rest, which is set to 0 in this work.
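As a concrete illustration, (1) can be discretized with a forward-Euler step and combined with the threshold/reset rule. The constants below (τ_m = 2, R = 1, V_th = 0.9) are illustrative, not values from the paper:

```python
def lif_step(u, i_in, tau_m=2.0, r=1.0, v_th=0.9):
    """One Euler step (dt = 1) of tau_m * du/dt = -u + R * I(t).
    Fires and resets to u_rest = 0 when the potential reaches v_th."""
    u = u + (-u + r * i_in) / tau_m  # leaky integration
    if u >= v_th:
        return 0.0, 1  # reset potential, emit spike
    return u, 0
```

Driven by a constant input current of 1, the potential climbs through 0.5, 0.75, 0.875 and fires on the fourth step.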
The membrane potential's neural dynamics can be divided into three phases, each of which can be described in a discrete computational form:

Input current integration phase. All the presynaptic currents generated by the presynaptic spikes are integrated at each discrete timestep:

I_i[t] = Σ_j w_ij · s_j[t] + b_i  (2)

where the subscript i denotes the i-th neuron, w_ij is the synaptic weight from neuron j to neuron i, and b_i is a bias.

Membrane potential update phase. The membrane potential of each neuron is updated by the integrated presynaptic currents at each timestep:

v_i[t] = (1 − 1/τ_m) u_i[t] + I_i[t]  (3)

where (1 − 1/τ_m) < 1 is the leaky term, which is ignored when using the IF model.

Output spike generation phase. Whenever the membrane potential reaches the firing threshold, the neuron generates an output spike and resets its membrane potential.
(u_i[t+1], s_i[t+1]) = (v_i[t], 0) if v_i[t] < V_th;  (0, 1) if v_i[t] ≥ V_th  (4)

From these three phases, we make two key observations. The input current integration phase completely dominates the total computational cost due to the high degree of synaptic connectivity and the large number of neurons. The membrane potential update phase has the harshest storage requirement because the membrane potential is read and written back and forth in every timestep. We focus on these two aspects in the following sections.

B. Dataflow and Parallelism Scheme for SCNNs

Similar to convolutional neural networks (CNNs), convolutional layers dominate the total computational cost in spiking convolutional neural networks (SCNNs). We mainly focus on the dataflow optimizations of the convolutional layers and show that the dataflow can be migrated to fully connected layers.

Algorithm 1: Pseudocode of the FireFly architecture.

Input: binary spike map size (H, W), input/output channels (Cin, Cout), kernel size (Kh, Kw), total timesteps T, leaky factor λ, threshold Vth, and parallelism factor P. The input and output channels are divided into (ci = ⌈Cin/P⌉, co = ⌈Cout/P⌉) groups.
Input: T × ci fragments of I[P][H×W] streams; each stream passes through the hardware co times.
Output: co × T fragments of O[P][H×W] streams.

  create buffer for synaptic weights: W[P][Cin][Kh][Kw]
  create buffer for Psum/Vmem: V[P][H×W]
  for po ← 0 to co do
    load weights: W[P][Cin][Kh][Kw]
    for t ← 0 to T do
      for pi ← 0 to ci do
        for s ← 0 to H×W do            // unrolled and pipelined
          for o ← 0 to P do
            for i ← 0 to P do
              w ← W[o][pi × P + i][0→Kh][0→Kw]
              x ← neighbourhood(I[i][s])    // spike window around position s
              V[o][s] ← V[o][s] + w · x
            end
          end
          if pi = ci − 1 then
            for o ← 0 to P do
              V[o][s] ← V[o][s] × (1 − λ)
              if V[o][s] > Vth then
                V[o][s] ← 0;  O[o][s] ← 1
              else
                O[o][s] ← 0
              end
              if t = T − 1 then V[o][s] ← 0
            end
          end
        end
      end
    end
  end

Input/output spike representations vary across neuromorphic hardware. Most SNN hardware implementations adopt the Address-Event-Representation (AER) data format to transmit spikes between neurons. The standard AER package for one spike includes the spiking neuron's location and the spike's timestamp.
Although the AER data format is compatible with the event-driven nature of SNNs, multiple bits are needed to express the original single-bit spike event. The logic and storage overhead may not be worth it. This paper adopts the original single-bit format to represent the binary spikes. At any discrete timestep t in the digitalized SCNN, the output spikes of all the neurons in one channel of the convolutional layer can be considered a timestep snapshot in the form of a binary map [36]. In this case, the input-current integration phase computation process of the SNNs is almost the same as that of the traditional ANNs except for the additional time dimension and the changed operation.

Fig. 1. FireFly Architecture. [Figure: system diagram of the PS (ARM CPU, DDR4, AXI interconnect, DDR controller) and the PL (AXI DataMover, DSP48E2-chain PE array, four-level weight delivery hierarchy, line buffer for Conv / shift register for MLP, Psum-Vmem unified buffer with update engine, and an optional MaxPool bypass).]

The set of computations for the complete SNN convolutional layer that receives a single batch of input can be formulated as a loop nest over 7 variables. All permutations of the 6 loop variables other than the timestep variable are legal. Permutations of the loop variables open up different dataflow choices, while tiling of the loop variables enables different parallelism schemes. Different dataflow schemes for convolution have been extensively studied by Eyeriss [9]; the key consideration is how to minimize data movement and maximize data reuse.
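To make the storage argument concrete (an illustrative sketch; the array shapes are hypothetical, not FireFly's on-chip layout), a timestep snapshot can be bit-packed at exactly one bit per neuron, whereas AER spends a multi-bit address per spike event:

```python
import numpy as np

def pack_spike_map(spikes):
    # One channel's timestep snapshot as a flat bit array: 1 bit per neuron.
    return np.packbits(spikes.astype(np.uint8).ravel())

def aer_events(spikes):
    # AER-style encoding: one (row, col) address per spike event.
    return list(zip(*np.nonzero(spikes)))

snapshot = np.array([[0, 1], [1, 1]])
bitmap = pack_spike_map(snapshot)   # 4 neurons -> 1 byte in total
events = aer_events(snapshot)       # 3 events, each needing a multi-bit address
```

For dense spike activity, the binary-map form is also a natural match for the streaming dataflow described next, since a snapshot can be read sequentially without address decoding.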
In SCNN, synaptic connection weights need to be fetched and membrane voltage needs to be updated at every timestep, due to the unique time dimension in SNN computation. Therefore, output- and weight-stationary dataflow can minimize the movement of the multi-bit membrane voltage and synaptic weight data between on-chip logic and off-chip memory. Different tiling strategies for the loop variables enable different parallelism schemes. The tiling of the loop variables can induce data reordering or data segmentation. We argue that it is important to keep the input and output spike arrangements the same to enable spikes to be processed in an on-the-fly fashion without complicated data rearrangement. We choose the spatial tiling of the input and output channel dimensions rather than tiling within the same spike feature map to avoid data rearranging or irregular off-chip data access. Adopting the dataflow and parallelism scheme above, the pseudo-code of FireFly is described in Algorithm 1.
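A behavioral sketch of Algorithm 1's loop nest is given below (plain Python over in-memory arrays; for brevity the kernel is fixed at 1×1 so neighbour() collapses to a single-site read, and Cin, Cout are assumed divisible by P):

```python
import numpy as np

def firefly_loop_nest(I, W, T, P, lam, v_th):
    # Behavioral model of Algorithm 1 with a 1x1 kernel.
    # I: [T][Cin][S] binary spikes, W: [Cout][Cin] weights, S = H*W sites.
    Cout, Cin = W.shape
    S = I.shape[2]
    ci, co = Cin // P, Cout // P           # input/output channel groups
    O = np.zeros((T, Cout, S), dtype=np.uint8)
    for po in range(co):                   # weights stay resident per output group
        V = np.zeros((P, S))               # Psum/Vmem buffer for this group
        for t in range(T):
            for pi in range(ci):           # accumulate input current group by group
                for s in range(S):
                    for o in range(P):
                        for i in range(P):
                            if I[t][pi * P + i][s]:      # multiplex-accumulate
                                V[o][s] += W[po * P + o][pi * P + i]
            # after the last input group: leak, fire, reset (Eq. 4 dynamics)
            V *= (1.0 - lam)
            fired = V > v_th
            O[t, po * P:(po + 1) * P, :] = fired
            V[fired] = 0.0
        # V starts fresh for the next output group, matching the t = T - 1 clear
    return O
```

Note how the output-stationary choice shows up: V for one output-channel group lives in the buffer across all ci input groups and all T timesteps before any result leaves.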
IV. HARDWARE ARCHITECTURE

A. Architecture Overview

In this section, the digital design of SNNs is discussed in detail. Fig. 1 shows the overall system design of FireFly. FireFly targets heterogeneous Zynq Ultrascale devices. The central processing unit (CPU) of the processing system (PS) acts as the controller for system state control and external memory access. The programmable logic (PL) accelerates the SNN inference. The AXI DataMover IP, instead of the AXI DMA IP, enables high-throughput and low-latency data transactions between the off-chip DRAM and on-chip memory storage.
The unique store-and-forward feature of the AXI DataMover is enabled to allow multiple outstanding requests. The weight-stationary systolic array is responsible for the acceleration of SNN arithmetic operations. The systolic array consists of several DSP48E2 chains and multiple adder trees. A weight matrix delivery hierarchy is proposed to enable efficient weight loading into the systolic array. Two separate datapaths for convolutional and fully connected layers are designed to generate binary spike vectors for the systolic array. A Psum-Vmem unified buffer and update engine is constructed to support back-and-forth membrane potential updates and IF/LIF neuron dynamics. An optional MaxPooling unit is placed on the output spike datapath to support on-the-fly pooling.
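The IF/LIF dynamics handled by the update engine can be summarized by a scalar sketch (a hypothetical helper; λ is the leaky factor from Algorithm 1, and λ = 0 degenerates LIF to IF):

```python
def update_vmem(psum, v_th, lam=0.0):
    # One membrane update: apply leak, compare to threshold,
    # then fire-and-reset or retain. Returns (new_vmem, spike).
    v = psum * (1.0 - lam)     # LIF leak; identity when lam == 0 (IF)
    if v > v_th:
        return 0.0, 1          # fire and reset
    return v, 0
```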
The designs of the systolic array, the spike vector generation unit, the synaptic weight delivery hierarchy, and the Psum-Vmem unified buffer are elaborated in detail below.

B. Synaptic Operations Featured by DSP48E2

As shown in Fig. 2A, the DSP48E2 is the dedicated digital signal processing logic block in the Xilinx Ultrascale series FPGAs. Most FPGA neuromorphic hardware simply treats these blocks as multipliers and leaves them underutilized. However, they can enhance the speed and efficiency of many applications far beyond multiplication-based digital signal processing [37]. When customizing arithmetic operations for the SNN model, the mathematical dot product between the binary spikes and the synaptic weights can be modeled as a multiplex-accumulate operation, which in this paper we call the synaptic operation.
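As a software analogy for the synaptic operation (the DSP48E2 mapping itself is described in the text), the binary spike replaces the multiplier input with a mux select:

```python
def synaptic_op(spikes, weights, acc=0):
    # Multiplex-accumulate: each binary spike selects between adding the
    # synaptic weight or adding zero -- no multiplier is required.
    for s, w in zip(spikes, weights):
        acc += w if s else 0   # spike acts as the select line of a mux
    return acc
```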
The spike acts like the control signal of

[Fig. 2: DSP48E2 internal block diagram (rendered twice in the extracted layout) — dual A and B input registers, D register with pre-adder, 27×18 multiplier, W/X/Y/Z multiplexers feeding the ALU, pattern detector, and 48-bit P output, with control ports INMODE, OPMODE, ALUMODE, CARRYINSEL and cascade ports ACIN/ACOUT, BCIN/BCOUT, PCIN/PCOUT; a further panel shows the A1/A2 and B1/B2 pipeline registers and the SIMD add path to P.]
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='OP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='W ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='X ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Y ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Z ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='PCIN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='PCOUT ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='OPMODE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='9 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='48 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='30 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='18 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='48 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='A1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='A2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='B2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='B1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='C ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='SIMD ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Add ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='OP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='W ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='X ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Y ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Z ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='PCIN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='PCOUT ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='OPMODE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='9 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='48 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='30 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='18 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='48 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='0 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Shared ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='OPMODE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='SIMD=4 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='4x4=16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Spike ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Operations ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='C) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='A) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='B) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Implementing Synaptic operations Using DSP48E2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' A) The functional circuit diagram of a single DSP48E2 slice [37].' 
B) A simplified functional circuit diagram of the DSP48E2 performing spike-based computations. C) An equivalent circuit of the DSP48E2 when SIMD mode is enabled.

the multiplexer, switching the synaptic weight on or off depending on whether the neuron is firing or resting. The following adder sums up all the synaptic weights coming from the firing neurons. In traditional ANNs, one operation usually refers to one two-operand multiplication or one two-operand addition. In SNNs, we define one synaptic operation as one 2:1 multiplexing or one two-operand addition. We show that the dedicated DSP48E2 unit can provide up to 16 synaptic operations at high speed. This technique is described in detail below.
When the first-stage multiplier in DSP48E2 is disabled, the ALUMODE control bits are all cleared, and the carry inputs are ignored, the simplified DSP slice operation of the ALU stage shown in Fig. 2B can be expressed as:

Post Adder Out = W + X + Y + Z,

where W, X, Y and Z are the outputs of four built-in 48-bit wide bus multiplexers. Moreover, the post-adder can be statically configured into SIMD mode, supporting a single 48-bit adder, dual independent 24-bit adders, or quad independent 12-bit adders. The outputs of the four multiplexers are always added together by the post-adder.
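The quad 12-bit SIMD behavior of the post-adder can be sketched functionally. The snippet below is a hedged bit-level illustration of that mode (not vendor code): a 48-bit word is treated as four independent lanes, and carries are suppressed at lane boundaries.

```python
LANE_BITS = 12                      # quad independent 12-bit adders (SIMD mode)
LANES = 4
LANE_MASK = (1 << LANE_BITS) - 1

def pack(lanes):
    """Pack four 12-bit lane values into one 48-bit word."""
    word = 0
    for i, v in enumerate(lanes):
        word |= (v & LANE_MASK) << (i * LANE_BITS)
    return word

def unpack(word):
    """Split a 48-bit word back into its four 12-bit lanes."""
    return [(word >> (i * LANE_BITS)) & LANE_MASK for i in range(LANES)]

def simd_add(*words):
    """Add 48-bit words lane by lane; carries never cross a lane boundary,
    mimicking the post-adder's quad 12-bit SIMD configuration."""
    out = 0
    for i in range(LANES):
        shift = i * LANE_BITS
        lane_sum = sum((w >> shift) & LANE_MASK for w in words) & LANE_MASK
        out |= lane_sum << shift
    return out
```

For example, adding pack([0xFFF, 0, 0, 0]) and pack([1, 0, 0, 0]) leaves lane 1 at zero, whereas a plain 48-bit addition would carry into it.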
There are dozens of input combinations for these multiplexers; the one we use is: either C or all 0s on the W multiplexer; either A:B or all 0s on the X multiplexer; all 0s on the Y multiplexer; and either P, PCIN, or all 0s on the Z multiplexer. The 30-bit A and 18-bit B data inputs can optionally be registered once or twice to construct a pipeline stage, while the 48-bit C data input can optionally be staged once. The post-adder's output can be staged into the P register, and PCIN is the cascade input from the lower DSP slice. A nine-bit control input named OPMODE contains the select fields for the W, X, Y, and Z multiplexers and can be changed dynamically.

TABLE I
RESOURCE UTILIZATION COMPARISON

         DSP48E2   LUT    FF   CARRY8
DSP         1        0     0      0
Fabric      0       86   114      8
Utilizing the wide bus multiplexers, the cascade datapath, and the SIMD mode of the post-adder in DSP48E2, we can pack up to 16 synaptic operations into a single DSP slice. In this work, the synaptic connection weights are quantized into INT8 by the well-established post-training quantization or quantization-aware training methods developed for traditional neural networks (NNs). Four sets of INT8 weights are sign-extended to INT12 and concatenated into a 48-bit word. The upper 30 bits are assigned to input port A while the lower 18 bits are assigned to input port B; A and B are concatenated and multiplexed by the X multiplexer. In NNs, the input activations are shared by different sets of weights to generate different output channels. In this case, one spike is fetched to dynamically switch the X multiplexer between the four sets of weights (A:B) and all 0s, performing four 2:1 multiplex operations simultaneously.
Similarly, another four sets of INT8 weights are resized, concatenated, and directly assigned to the C data input; another spike is fetched to dynamically switch the W multiplexer between C and all 0s, performing another four 2:1 multiplex operations. The Z multiplexer selects the PCIN input, the partial sum cascaded from the lower DSP slice. The Y multiplexer output is set to all 0s. The post-adder is set to SIMD mode and acts as four independent 12-bit adders, summing the four multiplexer outputs and performing the equivalent of eight addition operations. Therefore, as shown in Fig. 2C, a single DSP48E2 slice can contribute 16 synaptic operations in total without general fabric logic overhead. Direct access to these specific features of the DSP48E2 is achieved by directly instantiating the DSP48E2 primitive.
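Put together, one slice's per-cycle behavior can be modeled bit-accurately. The sketch below follows the configuration just described (X mux gated by one spike, W mux gated by another, Z mux taking PCIN, quad 12-bit SIMD post-adder); the function names are ours, introduced for illustration only.

```python
LANE_BITS = 12
LANE_MASK = (1 << LANE_BITS) - 1

def pack_weights(ws):
    """Four signed INT8 weights -> one 48-bit word of INT12 two's-complement lanes."""
    word = 0
    for i, w in enumerate(ws):
        word |= (w & LANE_MASK) << (i * LANE_BITS)
    return word

def to_signed(lane):
    """Decode a 12-bit two's-complement lane back to a signed value."""
    return lane - (1 << LANE_BITS) if lane & (1 << (LANE_BITS - 1)) else lane

def dsp48e2_slice(spike_x, spike_w, weights_ab, weights_c, pcin=0):
    """P = X + W + Z in quad 12-bit SIMD mode:
    X mux: A:B (four weights) or all 0s, gated by spike_x;
    W mux: C   (four weights) or all 0s, gated by spike_w;
    Z mux: PCIN, the partial sum cascaded from the lower slice."""
    x = pack_weights(weights_ab) if spike_x else 0
    w = pack_weights(weights_c) if spike_w else 0
    p = 0
    for i in range(4):
        s = i * LANE_BITS
        lane = (((x >> s) & LANE_MASK) + ((w >> s) & LANE_MASK)
                + ((pcin >> s) & LANE_MASK)) & LANE_MASK
        p |= lane << s
    return p

def channels(p):
    """The four signed 12-bit output channels carried in P."""
    return [to_signed((p >> (i * LANE_BITS)) & LANE_MASK) for i in range(4)]
```

Each call performs the eight 2:1 multiplex operations (four per spike) and the eight SIMD additions counted above, i.e., 16 synaptic operations per slice per cycle.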
The straightforward implementation of the synaptic operations described above in general fabric would consume 86 look-up tables, 114 flip-flops and 8 carry chains (Table I). Though this might not seem expensive on a small scale, it is considerably less efficient than the proposed approach and will lead to a compromised clock frequency.

C. Systolic Array for Synaptic Operations

The systolic array is a specialized mesh of homogeneous PEs designed to process massive parallel computations. It has the potential to run at a high frequency due to its regular and adjacent interconnections. However, designing systolic arrays is not trivial. Previous neuromorphic hardware adopting a systolic array architecture failed to achieve satisfactory performance, either in resource efficiency or clock frequency. Most systolic arrays targeting FPGA devices are implemented in low-speed general fabric.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' In this paper, we design a 6 high-performance systolic array featured by the DSP48E2 for SNNs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' A more straightforward representation of the aforemen- tioned synaptic operations featured by a single DSP48E2 slice can be expressed as follow: pi = si · Wi + pi−1, p−1 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' where si is the 1 × 2 binary spike vector, and wi is the 2 × 4 INT8 synaptic weights matrix, pi is the 1 × 4 partial sum vector, and the pi−1 is the partial sum vector contributed by the lower DSP slice with the same shape as pi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' · represents the spikes-weights vector-matrix multiplication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' The 12-bit representation of each channel in pi allows up to eight DSP48E2 slices to cascade in a row without possible numeric overflow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' In this way, the extended synaptic operations featured by a cascaded DSP48E2 chain can be expressed as follows: p = 7 � i=0 si · Wi = s · W .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Where s is the 1 × 16 binary spike vector, and W is the 16 × 4 8-bit-integer (INT8) synaptic weights matrix, p is the 1 × 4 partial sum vector.' 
The cascaded DSP48E2 chain is the basic processing element (PE) in our systolic array design. A PE consists of eight cascaded DSP48E2 slices. An M × N systolic array consists of M/4 columns of PEs, with each column consisting of N/16 PEs and an adder tree. Each column of the systolic array computes N/16 multiplications of a 1 × 16 binary spike vector with a 16 × 4 weight matrix, while the adder tree sums up the results from the N/16 PEs, generating four output channels. With M/4 columns, the systolic array generates M output channels in total. Each PE in the systolic array contains a different set of synaptic weights. Adopting a weight-stationary scheme, synaptic weights remain cached in a PE until they are no longer needed. The same 1 × N binary spike vector is shared across columns horizontally, and M partial sums flow out of the systolic array vertically.
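The array-level dataflow can be sketched as a functional model of the M/4-columns-by-N/16-PEs organization above; the weight layout used here is our own assumption for illustration.

```python
def systolic_array(spikes, pe_weights, M, N):
    """spikes: 1xN binary vector, broadcast horizontally to all M/4 columns.
    pe_weights[col][pe]: the 16x4 signed weight matrix held stationary in that PE.
    Each column's adder tree sums its N/16 PE results into 4 output channels,
    so the array emits M channels in total."""
    outputs = []
    for col in range(M // 4):
        acc = [0, 0, 0, 0]                     # adder tree accumulator
        for pe in range(N // 16):
            s = spikes[16 * pe: 16 * pe + 16]  # the spike slice seen by this PE
            Wp = pe_weights[col][pe]
            for ch in range(4):
                acc[ch] += sum(s[k] * Wp[k][ch] for k in range(16))
        outputs.extend(acc)                    # 4 channels per column
    return outputs
```

With M = 8 and N = 32, this reduces to an ordinary 1 × 32 by 32 × 8 product, which makes the model easy to cross-check.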
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Spike Vector Generation for Convolution by Line Buffer Similar to ANN, 2-D convolution is the basic operation in a digitalized SCNN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' We incorporate the traditional line buffer design [38] to generate the spike window needed for the spike- map convolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' The line buffer is commonly seen in CNN accelerator design because it can efficiently achieve kernel- level parallelism and ensure good reuse of image data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' When FireFly is configured to SCNN mode, Cin channels of binary spike map are bundled together and stream into the line buffer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' The Kh×Kw spikes-bundle window is then flattened to a Kh×Kw ×Cin vector and sent to the systolic array.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' In most of the established CNN architectures, 3 × 3 convolution with stride 1 and the same padding is the most common configu- ration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' The SCNN architecture follows this scheme.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Ideally, general neuromorphic hardware for SNN should support all types of convolutional layers with different configurations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' But the hardware would not work efficiently for all types of convolution configuration and such design would cause hardware overhead, thus might not be feasible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Therefore, we design specialized line buffer logic for 3 × 3 convolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Nevertheless, the methods discussed here are compatible with other kernel sizes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Using the Dynamic Function Exchange features in FPGA, hardware supporting different types of convolutional layers can be dynamically deployed in FPGA during runtime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' When FireFly is configured for multi-layer perception (MLP) topology mode, the line buffer datapath for SCNN is left idle and the shift register datapath for MLP is switched on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' The shift register forms a serial-to-parallel stream width adapter by combining the Cin input spikes of Kh × Kw input transactions into one.' 
The length of the binary spike vector in the SCNN and MLP datapaths is the same, compatible with the height of the systolic array.

E. Synaptic Weight Delivery in a Multi-level Hierarchy

An M × N systolic array configured in weight-stationary mode needs M × N sets of weights. Switching the current set of stationary synaptic weights to the next set can be problematic: the instantaneous switching bandwidth is extremely high, yet switching only occurs when the weights expire. The main idea of our solution is that the instantaneous bandwidth needed when switching to the next set of weights should be amortized over the idle period during which the weights are kept stationary. As shown in Fig. 3D, we propose a 4-level synaptic weight memory hierarchy to enable on-the-fly delivery of weights with minimum resource consumption.
First, the synaptic weight stream coming from the AXI DataMover is adapted by the Lv1 stream width adapter. The adapted weight stream flows into the Lv2 Partial Reuse FIFO and is reused T times. The weight stream from the Partial Reuse FIFO stages its way through the Lv3 width adapter and then gets cached in the Lv4 skid buffer. The systolic array holds the current set of weights stationary by applying back pressure to the skid buffer and releases the pressure when the current set of weights is no longer needed. A stream width adapter converts an N-bit input stream to an N × M-bit output stream by collecting M elements of the input stream and firing them all at once. A skid buffer is the smallest pipeline FIFO buffer: it decouples the two sides of a ready/valid handshake to allow back-to-back transfers without a combinational path between input and output, thus pipelining the path.
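The Lv1/Lv3 adapters and the Lv2 reuse behavior can be sketched as plain stream transformers. This is a functional model only: the batch size and reuse count T are parameters of the sketch, and back pressure is left implicit in the generator semantics.

```python
def width_adapter(stream, m):
    """Serial-to-parallel adapter (Lv1/Lv3): gather m narrow beats, fire one
    wide beat, converting an N-bit stream into an N*m-bit stream."""
    beat = []
    for elem in stream:
        beat.append(elem)
        if len(beat) == m:
            yield tuple(beat)
            beat = []

def partial_reuse(stream, batch, T):
    """Lv2 Partial Reuse FIFO, functionally: each batch of beats is replayed
    T times before the next batch streams in, amortizing the DDR bandwidth."""
    buf = []
    for beat in stream:
        buf.append(beat)
        if len(buf) == batch:
            for _ in range(T):
                yield from buf
            buf = []
```

For instance, list(partial_reuse(width_adapter(range(8), 2), batch=2, T=2)) replays each pair of wide beats twice: [(0, 1), (2, 3), (0, 1), (2, 3), (4, 5), (6, 7), (4, 5), (6, 7)].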
The Partial Reuse FIFO is the key component in this 4-level synaptic weight delivery hierarchy. Most designs utilize dual-port RAM to build a ping-pong buffer (shown in Fig. 3A) or a FIFO to hide the latency of the data transfer process. However, the traditional ping-pong buffer mechanism can be problematic, and the FIFO mechanism does not support data reuse. The switching of the ping-pong buffer may complicate the controller design, and ping-pong buffers are costly and inefficient.
The depth of the buffer must be large enough to support the most storage-expensive cases, not to mention that the buffer size has to be doubled for ping-pong operation. However, the worst-case scenario rarely occurs: only a small portion of the ping-pong buffer is occupied most of the time. While these problems are negligible in ANN accelerator design, we cannot afford to "double the size" in SNN neuromorphic hardware design because the required memory bandwidth has already increased multiple times.

Fig. 3. Different Approaches for Hiding Data Transfer Latency to Improve Throughput. A) Ping-pong buffer. B) Synchronous FIFO. C) The Proposed Partial Reuse FIFO. D) A four-level synaptic weight delivery hierarchy to enable synaptic weight reuse, reduce off-chip memory bandwidth and hide the weight loading latency to the systolic array.
Ideally, the on-chip buffer that stores the synaptic weights in SNN should have the following properties:
1) We do not need to double the buffer size and split the buffer into two regions for ping-pong operation just to guarantee that no read-write collision will happen. No manual switching of the split buffers is needed.
2) In SNN, the same synaptic weights need to be accessed at every timestep. We expect the data in the buffer to be read several times before they expire and are replaced by new data.
3) The depth of the buffer is set to support the most storage-expensive cases, but multiple batches of data can be preloaded into the available RAM space when the storage requirements are less demanding.
We propose the Partial Reuse FIFO to address the above requirements and enable data reuse and buffer space exploitation without complex control logic. As shown in Fig.
3B, a traditional synchronous FIFO can be described as a ring. The circumference of the ring represents the depth of the FIFO, and the width of the ring represents the data width of the FIFO. A push pointer marks the write address of the incoming data; a pop pointer marks the read address of the output data. When the push pointer and the pop pointer point to the same address, the FIFO is either full or empty, depending on whether the occupancy of the FIFO is rising or falling. When the FIFO is full, the ready signal of the input AXI-Stream is deasserted. When the FIFO is empty, the valid signal of the output AXI-Stream is deasserted. As shown in Fig.
3C, the mechanism of the Partial Reuse FIFO is the same as that of the traditional synchronous FIFO, except that a partial region of the FIFO ring cannot be flushed by incoming data until it has been reused T times, where T is a control register of the Partial Reuse FIFO. The reuse region of the FIFO is delimited by the Start and End labels. The pop pointer jumps back to the Start position whenever it reaches End, and a reuse counter increments on every such jump. The Start label stays the same while the region is still being reused. When the counter reaches T, the counter is reset, the End label becomes the next Start label, and the next End label is set to Start+L-1, where L is the other control register of the Partial Reuse FIFO. Unlike the traditional synchronous FIFO, when the push pointer meets the Start label, the Partial Reuse FIFO is full and the ready signal of the input AXI-Stream is deasserted.
When the End label is ahead of the push pointer, the Partial Reuse FIFO is considered empty until the reuse sector is filled by the input stream.

The Partial Reuse FIFO satisfies the aforementioned properties. Using the valid-ready handshake protocol of the AXI-Stream, its function is self-contained, with only two control registers exposed. It contains only a monolithic RAM that does not need to be split, and the push-pop pointer logic ensures no read-write collision. The reuse sector protected by the Start-End labels enables data reuse, and new data from multiple batches can be pushed into the FIFO sequentially as long as the FIFO is not full.
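The pop-side behavior of the Partial Reuse FIFO can be sketched as follows. This Python model is our illustration under the description above (it abstracts away the push side and the full/empty handshake): a window of L entries is read T times before the Start/End labels advance to the next L entries.

```python
def reuse_pop_sequence(data, l, t):
    """Behavioral sketch of the Partial Reuse FIFO pop side: the pop
    pointer sweeps a reuse region of length l, jumping back to Start
    each time it reaches End; only after t sweeps does the window
    advance (Start := old End + 1, End := Start + l - 1)."""
    out = []
    start = 0
    while start + l <= len(data):
        for _ in range(t):                # region reused t times
            out.extend(data[start:start + l])
        start += l                        # labels advance after t reuses
    return out

# A 4-entry weight region reused twice before the next region streams out.
print(reuse_pop_sequence([1, 8, 7, 2, 9, 3, -7, -5], 4, 2))
# → [1, 8, 7, 2, 1, 8, 7, 2, 9, 3, -7, -5, 9, 3, -7, -5]
```

In FireFly, T corresponds to the number of timesteps over which the same synaptic weights are reused, so each weight set crosses the off-chip memory boundary only once per T accesses.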
F. Psum-Vmem Unified Buffer and Spike Generation Logic

A classic systolic array consumes data from the input and weight data domains and feeds data to the output data domain. If one data domain stays stationary, the other two must flow through the computing logic. This principle holds for the three classic input-, weight- and output-stationary dataflows. Our architecture adopts the weight-stationary dataflow: synaptic weights remain stationary in the systolic array, while the input binary spikes and the outputs flow in and out of it. The flowing spike vector is generated by the line buffer mechanism, and the outputs are stored in the proposed Psum-Vmem Unified Buffer. In our architecture, the synaptic operations in SNN are spatially parallelized. However, it is unlikely that a whole layer can be flattened spatially onto area- and power-restricted hardware substrates.
Therefore, certain tiling strategies need to be implemented. We adopt the channel tiling strategy to accommodate layers with a large number of channels on the same systolic array. Input spike map channels are split into multiple tiles to fit the height of the systolic array. Output spike map channels are calculated N at a time according to the width of the systolic array. In each single timestep, the partial sums of the N output spike map channels are stored on-chip and are not fully accumulated until all tiles of the input spike map channels have been calculated. In each layer, the membrane voltage of the N output spike map channels also needs to be stored on-chip until all timesteps have been iterated. Instead of instantiating separate buffers for the partial sums and the membrane voltage, we propose the Psum-Vmem Unified Buffer to reduce RAM consumption. Since tiles of input spike map channels in a single timestep are sent to the computing array one by one, and the temporal dimension of SNN is kept in its natural sequential execution order, the partial sum accumulation process and the membrane voltage update process can be scheduled using a finite state machine.

Fig. 4. Psum-Vmem Update Mechanism. A) The finite-state-machine performing the Psum-Vmem update. B) The proposed Psum-Vmem unified buffer and Psum-Vmem update engine. C) The hardware implementation details of the Psum-Vmem update engine.
There are three states specified in the FSM: the accumulating phase, the thresholding phase, and the clearing phase. During the accumulating phase, the Psum extracted from the Psum-Vmem unified buffer is accumulated with the computing results from the systolic array. When the last tile of the input spike map channels in the current timestep arrives and the current timestep is not the last, the FSM switches to the thresholding phase. The extracted Psum is first accumulated, then processed by the optional leak unit and the thresholding unit, and eventually written back to the unified buffer. The optional leak unit subtracts a fixed portion of the accumulated Vmem from its value to support the LIF neuron dynamics. The thresholding unit compares the Vmem with the threshold and, if it exceeds the threshold, generates a spike and resets the Vmem. All of the computations are pipelined to improve timing.
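The accumulate-leak-threshold-reset sequence of the thresholding phase can be sketched for a single neuron as follows. This is a behavioral sketch, not the RTL: modeling the fixed-portion leak as a right shift (`vmem >> leak_shift`) and the reset as a hard reset to zero are our assumptions for illustration.

```python
def update_neuron(vmem, psum, threshold, leak_shift=None, last_step=False):
    """One thresholding-phase update for a single neuron (sketch):
    accumulate the partial sum, optionally leak a fixed fraction of
    vmem (LIF dynamics, modeled here as a right shift), then compare
    with the threshold, firing and resetting vmem on a spike. On the
    last timestep the clearing phase zeroes vmem for the next layer."""
    vmem += psum
    if leak_shift is not None:
        vmem -= vmem >> leak_shift        # leak by a fixed fraction
    spike = 1 if vmem > threshold else 0
    if spike:
        vmem = 0                          # reset on firing
    if last_step:
        vmem = 0                          # clearing phase
    return spike, vmem

print(update_neuron(vmem=6, psum=3, threshold=8))   # → (1, 0): fires and resets
print(update_neuron(vmem=2, psum=3, threshold=8))   # → (0, 5): stays subthreshold
```

The update engine in Fig. 4C applies exactly this sequence element-wise to the N output channels read from the unified buffer, with all stages pipelined.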
The FSM switches back to the accumulating phase when this phase finishes. When the last tile of the input spike map channels in the last timestep arrives, the FSM switches to the clearing phase. The computation process is the same as in the thresholding phase, except that the Vmem value is cleared to reset the unified buffer for the next SNN layer.

V. IMPLEMENTATION AND EXPERIMENTS

A. Experiment Setup

Most neuromorphic hardware uses expensive, large FPGA devices, ignoring the feasibility of deploying such hardware in the real world. FireFly is mapped onto several off-the-shelf, commercially available Xilinx Zynq Ultrascale FPGAs, including the Ultra96v2, KV260 and ZCU104 evaluation boards, bringing SNNs closer to real-world applications in edge scenarios. The FPGA chips of the three evaluation boards are xczu3eg, xczu5ev, and xczu7ev, respectively.
Our proposed FireFly is designed using SpinalHDL, a hardware description language equipped with object-oriented and functional programming. Compared with an HLS-based code template, parameterized Verilog, or SystemVerilog, SpinalHDL offers a higher level of abstraction and reconfigurability. The Verilog code generated by the SpinalHDL compiler is synthesized and implemented in Xilinx Vivado 2021.1 with ML-based design optimization to achieve a higher clock rate and faster timing closure. Power consumption estimates and timing results are obtained after place-and-route using the power analysis and timing summary tools in the Vivado Design Suite, which provide detailed analysis and accurate estimation. Throughput performance is obtained by recording the timer value on the PS side of the Zynq while the PL runs the benchmark tasks.

B. Bridging the Gap between Peak and Avg. GSOP/s
The theoretical peak GSOP/s of an SNN accelerator is given as:

Peak GSOP/s = 2 × f × M × N, (5)

where f is the system clock frequency and M × N denotes the size of the systolic array. The peak GSOP/s calculation is the same as in [20] and [24]. In FireFly, M denotes the number of columns in the systolic array, while N denotes the number of rows. The peak performance is proportional to the systolic array size. However, the actual throughput, or average GSOP/s, can be degraded by insufficient bandwidth and inefficient controller design. In our design, the line buffer mechanism enables binary spike map reuse, the Partial Reuse FIFO enables synaptic weight reuse, and the Psum-Vmem buffer avoids back-and-forth fetches and stores. The memory bandwidth needed for off-chip data transfer is minimized and is thus not a bottleneck of the system's average performance.
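Eq. 5 can be checked directly against the two FireFly configurations reported later; the helper below is our convenience wrapper with f expressed in MHz:

```python
def peak_gsops(f_mhz, m, n):
    """Peak GSOP/s = 2 x f x M x N (Eq. 5), with f given in MHz.
    Dividing by 1e3 converts MSOP/s to GSOP/s."""
    return 2 * f_mhz * m * n / 1e3

# The two FireFly configurations at 300 MHz:
print(peak_gsops(300, 16, 144))  # → 1382.4
print(peak_gsops(300, 32, 288))  # → 5529.6
```

Both values match the peak GSOP/s figures reported in Table II for the 16 × 144 and 32 × 288 arrays.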
We argue that the communication between the controller and the accelerator significantly impacts the system's actual throughput.

TABLE II
COMPARISON WITH OTHER WORKS IN RESOURCE UTILIZATION.

Work   Device     Slice LUTs      Slice Registers  BRAM/URAM            DSP48          Freq.(MHz)  Peak GSOP/s
[39]   xc7vx690t  53k  (12.20%)   100k (11.50%)    65    (4.40%)        0    (0%)      100         /
[22]   xc7k325t   170k (83.70%)   113k (27.70%)    254   (57.10%)       0    (0%)      135         3.2
[24]   xcvu440    302k (11.90%)   421k (8.30%)     192   (7.60%)        0    (0%)      200         1562.5
[40]   xcku115    585k (88.20%)   232k (17.40%)    432   (20%)          0    (0%)      140         253
[25]   28nm ASIC  /               /                /                    /              200         684.5
[31]   28nm ASIC  /               /                /                    /              200         3970.1
ours1  xczu3eg    15k  (21.40%)   53k  (37.50%)    162   (75%)          288  (80%)     300         1382.4
ours2  xczu7ev    42k  (18.20%)   196k (42.60%)    25/40 (11.5%/41.6%)  1152 (66.60%)  300         5529.6
ours3  xczu5ev    32k  (27.35%)   112k (47.86%)    16/24 (11.1%/37.5%)  576  (46.20%)  300         1382.4×2

1 FireFly with a 16 × 144 systolic array implemented on Ultra96v2.
2 FireFly with a 32 × 288 systolic array implemented on ZCU104.
3 FireFly with two 16 × 144 systolic arrays implemented on KV260.
Note that we choose Zynq devices as the system platforms. The built-in host CPU controller enables fast deployment of different SNN networks without the need to change the PL logic. In most Zynq-based SNN accelerators such as Cerebron [20], the host program in the Zynq processing system sends synaptic weights and binary input spike maps to the Zynq programmable logic and collects the output spike maps of the different SNN layers. However, the control command sequence traveling between PS and PL through the low-performance AXI-Lite protocol induces non-negligible latency, leaving the systolic array idle and reducing the average throughput. In FireFly, the host program generates the command sequence in advance and sends the commands to the PL through a high-performance AXI-Stream into the internal command queue of the AXI DataMover. In this way, the req-ack waiting clock cycles between commands are eliminated, and the average throughput moves a step closer to the peak.
C. Performance Analysis
The size of the systolic array can be statically reconfigured in FireFly according to the on-chip resources of different evaluation boards. An M × N systolic array in FireFly receives N presynaptic inputs and produces partial sums for M neurons, where M = P and N = Kh × Kw × P. The resource consumption, memory bandwidth and acceleration performance are linearly proportional to the parallelism factor P. P can be any value as long as the systolic array fits in the target device. As P is also the tiling factor of the input and output channels of a convolutional layer, it is preferable to set P to a power of two because the number of channels in most convolutional layers is a power of two. Therefore, we evaluate two representative configurations, 16 × 144 and 32 × 288, to demonstrate the reconfigurability of FireFly.
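As a sanity check, the peak synaptic-operation rate of these configurations follows directly from the array dimensions. The sketch below assumes 3×3 kernels (so N = 3·3·P), two synaptic operations per PE per cycle, and the 300 MHz clock; under those assumptions it reproduces the reported figures:

```python
def peak_gsops(P, freq_hz=300e6, kernel=3, ops_per_pe_per_cycle=2):
    """Peak synaptic-operation rate (GSOP/s) of a P x (kernel*kernel*P)
    systolic array; ops_per_pe_per_cycle=2 reproduces the reported figures."""
    M = P                    # postsynaptic (output-channel) parallelism
    N = kernel * kernel * P  # presynaptic inputs consumed per cycle
    return M * N * ops_per_pe_per_cycle * freq_hz / 1e9

print(peak_gsops(16))  # 1382.4  (16 x 144 array)
print(peak_gsops(32))  # 5529.6  (32 x 288 array)
```

Because both M and N scale with P, peak throughput grows quadratically in P, while the weight bandwidth per cycle grows linearly with N.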
The usage of DSP48 to implement synaptic operations significantly reduces the fabric overhead and achieves significant GSOP/s improvements compared with most existing hardware. The performance of FireFly is impressive: FireFly with a 16 × 144 systolic array can achieve a peak performance of 1382.4 GSOP/s, and FireFly with a 32 × 288 systolic array can achieve a peak performance of 5529.6 GSOP/s, as shown in Table II. To the best of our knowledge, SIES [24] achieves the highest GSOP/s among all existing FPGA-based accelerators. Compared with SIES [24], FireFly mapped on xczu3eg consumes only 1/20 of the LUTs and 1/8 of the FFs but still achieves similar GSOP/s, whereas FireFly mapped on xczu7ev consumes only 1/7 of the LUTs and 1/2 of the FFs and achieves a 3.5× speedup.
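Functionally, the synaptic operation each PE performs is a multiplex-accumulate rather than a multiply-accumulate: the 1-bit spike selects whether the synaptic weight enters the running partial sum, so no multiplier is needed. A minimal Python sketch of one neuron's partial-sum update (illustrative only; the hardware maps this selection onto DSP48E2 logic):

```python
def multiplex_accumulate(spikes, weights, psum=0):
    """One neuron's partial-sum update: each binary spike gates (multiplexes)
    its synaptic weight into the accumulator -- no multiplication involved."""
    for s, w in zip(spikes, weights):
        psum += w if s else 0  # mux between w and 0, then accumulate
    return psum

spikes  = [1, 0, 1, 1, 0]
weights = [3, -2, 5, -1, 4]  # INT8-range synaptic weights
print(multiplex_accumulate(spikes, weights))  # 7
```

This is why synaptic weight precision (INT8 here) matters for the accumulator width but no multiplier array is required.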
Additionally, we map two heterogeneous FireFly cores onto xczu5ev to support the concurrent inference of two independent SNNs. We still achieve higher throughput compared with SpinalFlow and SATO, which are state-of-the-art SNN hardware accelerators built in 28nm ASIC. We are well aware that it is difficult to make an apples-to-apples comparison with hardware adopting different design methodologies, supporting different types of neurons, using different synaptic weight precisions or implemented on different platforms; nevertheless, FireFly can still be called a high-performance SNN accelerator due to its excellent GSOP/s performance.
D. Benchmark Evaluations
We deploy several state-of-the-art SNN networks trained by backpropagation algorithms [4] on FireFly to test the inference performance. We evaluate not only static datasets such as MNIST, CIFAR10 and CIFAR100 but also neuromorphic datasets such as DVS-CIFAR10 and DVS-Gesture. The models are trained using surrogate functions such as the quadratic gate and the arctangent gradient.
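For reference, the arctangent surrogate replaces the derivative of the non-differentiable Heaviside spike function with a smooth bell-shaped curve during backpropagation. The sketch below uses one common parameterization; the exact scale factor `alpha` used in training is an assumption here:

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    # Heaviside step: the neuron fires when membrane potential reaches threshold.
    return (v >= v_th).astype(np.float32)

def arctan_surrogate_grad(v, v_th=1.0, alpha=2.0):
    # Derivative of (1/pi) * arctan(pi * alpha * x / 2) + 1/2 at x = v - v_th,
    # used in place of the Heaviside derivative during backpropagation.
    x = v - v_th
    return alpha / 2.0 / (1.0 + (np.pi * alpha * x / 2.0) ** 2)

v = np.array([0.2, 1.0, 1.8])
print(spike_forward(v))          # [0. 1. 1.]
print(arctan_surrogate_grad(v))  # peaks (alpha/2) exactly at v == v_th
```

The surrogate only affects training; at inference time the hardware sees the hard threshold, which is why deployment needs no gradient logic.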
Direct coding and the backpropagation-through-time algorithm significantly reduce the total timesteps of the SNNs. In our experiments, the timesteps are scaled down to four without a significant accuracy drop. To deploy the PyTorch-trained SNN models on FireFly, we first apply batch-norm fusion to merge each batch normalization layer with its preceding convolutional layer. Then we adopt post-training quantization techniques to convert the Float32 synaptic weights to INT8 and the Float32 thresholds to INT18. Note that the performance drop of post-training quantization without further retraining or fine-tuning is negligible in SNNs because no scaling errors of multiplications are introduced. FireFly shows reconfigurability on different SNN models for different image classification tasks. We evaluate four different SNN model structures with 5, 7, 9, and 11 convolutional layers on five different datasets, as shown in Table III.

TABLE III
COMPARISON WITH RELATED WORK FOR MULTIPLE IMAGE CLASSIFICATION TASKS USING SNNS ON MULTIPLE DATASETS

Work | Network | Dataset | Latency | Accuracy | GSOP/s | Device | Frequency | Power
TVLSI'14 [41] | 784-500-500-10 | MNIST | 9.25ms | 94.2 | / | xc6slx150t | 75MHz | 1.5W
ICCAD'20 [27] | 28x28-32c3-p2-32c3-p2-256-10 | MNIST | 7.53ms | 99.42 | / | xczu9eg | 125MHz | 4.5W
TCAD'22 [30] | 28x28-16c-32c-8c-10 | MNIST | 45us | 98.5 | 22.6 | xc7z045 | 200MHz | 0.96W
TCAS-I'21 [42] | 784-200-100-10 | MNIST | 3.15ms | 92.93 | / | xc7vx485t | 100MHz | /
JCST'20 [24] | 28x28-12c5-p2-64c5-p2-10 | MNIST | / | 99.16 | 1562.5 | xcvu440 | 200MHz | /
TCAD'21 [22] | 32x32-32c3-p2-32c3-p2-256-10 | SVHN | 1.21ms | 82.15 | 3.2 | xc7k325t | 100MHz | 0.699W
 | 784-512-256-128-64-10 | FMNIST | 0.14ms | 89.01 | | | 200MHz | 0.982W
TRETS'22 [43] | 28x28-32c3-p2-32c3-p2-256-10 | MNIST | 77us | 99.17 | / | xczu9eg | 200MHz | 24.5W
 | 32x32-(192c5-192c1-192c1-p3)*2-192c5-192c1-10c1-AP-10 | CIFAR10 | 6.8ms | 88.19 | | | |
DATE'22 [26] | 144x144-p4-32c-p2-32c-p2-512-512-11 | NMNIST | 3.83ms | 97.81 | 51.2 | 22nm ASIC | 400MHz | 0.11W
 | | DVS-Gesture | 7.1ms | 92.4 | | | |
ours | SCNN-5 (1) | MNIST | 0.491ms | 98.12% | 91% (5) | xczu3eg | 300MHz | 2.55W
 | SCNN-7 (2) | CIFAR10 | 1.035ms | 91.36% | 89% (5) | | |
 | SCNN-11 (4) | CIFAR100 | 2.125ms | 64.28% | 86% (5) | | |
 | SCNN-9 (3) | DVS-CIFAR10 | 3.541ms | 72.40% | 87% (5) | | |
 | SCNN-9 (3) | DVS-Gesture | 3.541ms | 89.29% | 87% (5) | | |
1 SCNN-5: 28x28-16c3-64c3-p2-128c3-p2-256c3-256c3-10
2 SCNN-7: 32x32-16c3-64c3-p2-128c3-128c3-p2-256c3-256c3-p2-512c3-10
3 SCNN-9: 48x48-16c3-64c3-64c3-p2-128c3-128c3-p2-256c3-256c3-p2-512c3-512c3-10
4 SCNN-11: 32x32-16c3-64c3-64c3-p2-128c3-128c3-128c3-p2-256c3-256c3-256c3-p2-512c3-512c3-100
5 The GSOP/s utilization ratio: actual measured GSOP/s divided by the peak GSOP/s. The peak GSOP/s is 1382.4 on xczu3eg.
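The deployment pipeline described above, batch-norm fusion followed by post-training weight quantization, can be sketched with the standard folding identities. This is a generic NumPy illustration, not FireFly's actual conversion tooling, and the symmetric per-tensor scale choice is an assumption:

```python
import numpy as np

def fuse_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(y) = gamma * (y - mean) / sqrt(var + eps) + beta
    into the preceding layer's per-output-channel weights and bias."""
    s = gamma / np.sqrt(var + eps)
    return w * s[:, None], (b - mean) * s + beta

def quantize_int8(w_f):
    """Symmetric per-tensor post-training quantization to INT8."""
    scale = np.abs(w_f).max() / 127.0
    return np.clip(np.round(w_f / scale), -128, 127).astype(np.int8), scale

# Tiny check: the fused layer matches layer-then-BN on a random input.
rng = np.random.default_rng(0)
w, b = rng.normal(size=(4, 8)), rng.normal(size=4)  # 4 output channels
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)
x = rng.normal(size=8)

y_ref = gamma * ((w @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
w_f, b_f = fuse_bn(w, b, gamma, beta, mean, var)
assert np.allclose(w_f @ x + b_f, y_ref)

w_q, scale = quantize_int8(w_f)  # INT8 weights plus one float scale factor
```

Because spikes are binary, quantizing only the weights rescales every presynaptic contribution by the same factor, which is why the threshold can absorb the scale and no multiplication error accumulates.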
Note that our chosen device, xczu3eg, is an edge device with the fewest resources among all the listed hardware, but FireFly still shows significant improvement on all these benchmarks. Compared with [27], FireFly achieves a 15× speedup with similar accuracy on the MNIST dataset. Compared with [21], FireFly achieves higher accuracy and a 6× inference speedup on the CIFAR10 dataset. Compared with the ASIC design [26], FireFly achieves a 2× speedup with similar accuracy on the DVS-Gesture dataset. Note that our SNN models are considerably bigger and deeper than the listed benchmarks. When using the larger xczu7ev device, all the inference performances listed above improve by 4× because xczu7ev supports higher parallelism and has a peak performance of 5.53 TSOP/s. Our system also supports multiple heterogeneous cores running different SNN models concurrently.
When targeting xczu5ev, two FireFly cores can be deployed independently to support multiple real-world tasks.
E. Discussion
We argue that for FPGA-based SNN accelerator design, the benefits of designing complicated hardware to exploit spike sparsity may not make up for the losses from irregular interconnect and underutilization of the dedicated hard blocks. The system clock frequency can have a significant impact on inference performance. Compared with ASICs, routing in FPGAs contributes more delay, since logic elements are connected through a series of switching matrices instead of direct physical wires. A complex digital design with irregular interconnect can easily violate the timing requirements even on the most advanced FPGA devices. Most existing FPGA-based SNN accelerators can only satisfy a timing requirement of at most 200MHz, even on the expensive Virtex Ultrascale+ devices.
An important aspect of low-power FPGA system design is to utilize the existing dedicated hard blocks rather than build equivalents from scratch. Implementing the same function using the dedicated hard blocks of an FPGA usually consumes less energy than using their general-fabric counterparts. However, most existing FPGA-based SNN accelerators fail to delve into the features provided by these hard blocks and adopt naive implementations of spike computation in slow fabric. In this paper, FireFly provides a different perspective on designing dedicated neuromorphic hardware for spiking neural networks targeting FPGA devices. We are well aware that it is important to design hardware that supports sparsity acceleration. However, to the best of our knowledge, only a few studies [25] [31] targeting ASICs show significant speedups from this inherent nature of SNNs, not to mention the large majority of FPGA-based designs.
Instead of designing complicated circuits to support sparsity acceleration, FireFly consists of a monolithic systolic array and adopts a straightforward weight-stationary dataflow. The acceleration comes from the clock frequency improvement brought by the regular and simple interconnect of the systolic array, the pipelined arithmetic computations, and, most importantly, the flexible use of the multi-function DSP48E2s. In fact, the potential of the DSP48E2 is still far from being fully realized. Wu et al. [11] proposed a high-throughput processing array for matrix multiplication based on DSP supertiles and achieved peak DSP clock rates on Xilinx UltraScale (741 MHz) and UltraScale+ (891 MHz) devices. SNN accelerators can incorporate the DSP supertile design to achieve even higher performance. The potential of other dedicated hard blocks on FPGAs is also yet to be exploited.
Scaling the Cascades [10] fully utilized the dedicated cascade interconnect of the DSP48E2, BRAM36K, and URAM288K hard blocks and achieved nearly 100% usage of them, delivering remarkable inference speed on MLPerf benchmarks. It is necessary to migrate such existing hardware optimization techniques from ANN accelerator design to SNN neuromorphic hardware research. Nevertheless, we agree that, ideally, the main advantage of new SNN accelerators over ANNs on digital hardware comes primarily from exploiting the sparsity of spikes and not from the replacement of MAC operations with AC operations [44]. Future neuromorphic hardware design should exploit spike sparsity and migrate existing FPGA optimization techniques simultaneously.
VI. CONCLUSIONS
In this work, we introduced a high-throughput and reconfigurable hardware accelerator for spiking neural networks.
To achieve high-performance inference of SNNs, we fully exploited the features of the dedicated DSP48E2 embedded in the FPGA and achieved the highest GSOP/s compared with the existing accelerator designs. To improve memory efficiency, we designed a synaptic weight delivery hierarchy and a Psum-Vmem unified buffer to support the high parallelism. To demonstrate FireFly's reconfigurability, we evaluated multiple deep SNN models on various datasets. To make SNN applications more convenient, we used off-the-shelf, commercially available FPGA edge devices, offering a more feasible solution than any other existing hardware. In the future, we will try to migrate more optimization techniques targeting FPGAs while exploring sparsity acceleration to enable more energy-efficient SNN software and hardware co-design.

REFERENCES
[1] W. Maass, "Networks of spiking neurons: the third generation of neural network models," Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.
[2] Y. Wu, L. Deng, G. Li, J. Zhu, and L. Shi, "Spatio-temporal backpropagation for training high-performance spiking neural networks," Frontiers in Neuroscience, vol. 12, p. 331, 2018.
[3] W. Zhang and P. Li, "Temporal spike sequence learning via backpropagation for deep spiking neural networks," Advances in Neural Information Processing Systems, vol. 33, pp. 12022–12033, 2020.
[4] G. Shen, D. Zhao, and Y. Zeng, "Backpropagation with biologically plausible spatiotemporal adjustment for training deep spiking neural networks," vol. 3, no. 6, p. 100522. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2666389922001192
[5] Y. Kim and P. Panda, "Revisiting batch normalization for training low-latency deep spiking neural networks from scratch," Frontiers in Neuroscience, p. 1638, 2020.
[6] H. Zheng, Y. Wu, L. Deng, Y. Hu, and G. Li, "Going deeper with directly-trained larger spiking neural networks," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 12, 2021, pp. 11062–11070.
[7] Y. Wu, L. Deng, G. Li, J. Zhu, Y. Xie, and L. Shi, "Direct training for spiking neural networks: Faster, larger, better," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 1311–1318.
[8] M. Xu, Y. Wu, L. Deng, F. Liu, G. Li, and J. Pei, "Exploiting spiking dynamics with spatial-temporal feature normalization in graph learning," arXiv preprint arXiv:2107.06865, 2021.
[9] Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks," IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127–138.
[10] A. Samajdar, T. Garg, T. Krishna, and N. Kapre, "Scaling the cascades: Interconnect-aware FPGA implementation of machine learning problems," in 2019 29th International Conference on Field Programmable Logic and Applications (FPL), pp.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 342–349, ISSN: 1946-1488.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [11] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Zhang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Berman, and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Cho, “A high-throughput reconfig- urable processing array for neural networks,” in 2017 27th International Conference on Field Programmable Logic and Applications (FPL), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 1–4, ISSN: 1946-1488.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [12] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Davies, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Srinivasa, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Lin, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Chinya, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Cao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Choday, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Dimou, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Joshi, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Imam, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Jain, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Liao, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Lin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Lines, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Liu, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Mathaikutty, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' McCoy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Paul, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Tse, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Venkataramanan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Weng, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wild, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Yang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wang, “Loihi: A neuromorphic manycore processor with on-chip learning,” vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 38, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 82–99, conference Name: IEEE Micro.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [13] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Pei, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Deng, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Song, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Zhao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Zou, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' He, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Chen, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Deng, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Yang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Ma, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Han, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Li, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Zhao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Xie, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Shi, “Towards artificial general intelligence with hybrid tianjic chip architecture,” vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 572, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 7767, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 106–111, number: 7767 Publisher: Nature Publishing Group.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Available: https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='nature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='com/articles/s41586-019-1424-8 [14] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Painkras, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Plana, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Garside, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Temple, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Galluppi, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Patterson, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Lester, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Brown, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Furber, “SpiNNaker: A 1-w 18- core system-on-chip for massively-parallel neural network simulation,” vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 48, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 1943–1953, conference Name: IEEE Journal of Solid- State Circuits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [15] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Akopyan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Sawada, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Cassidy, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Alvarez-Icaza, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Arthur, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Merolla, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Imam, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Nakamura, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Datta, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Nam, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Taba, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Beakes, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Brezzo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Kuang, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Manohar, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Risk, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Jackson, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Modha, “TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip,” vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 34, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 10, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 1537–1557, conference Name: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [16] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Schemmel, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Br¨uderle, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Gr¨ubl, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Hock, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Meier, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Millner, “A wafer-scale neuromorphic hardware system for large-scale neural modeling,” in 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 1947–1950, ISSN: 2158-1525.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [17] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Feldmann, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Youngblood, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wright, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Bhaskaran, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Pernice, “All-optical spiking neurosynaptic networks with self-learning capabilities,” vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 569, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 7755, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 208–214, number: 7755 Publisher: Nature Publishing Group.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Available: https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='nature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='com/articles/s41586-019-1157-8 [18] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Zhou, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Lin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Chen, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Xie, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Fan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Fang, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Dai, “Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit,” vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 15, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' 367–373, number: 5 Publisher: Nature Publishing Group.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Available: https://www.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='nature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='com/articles/s41566-021-00796-w [19] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Yang, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Wang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Ma, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Mao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Ren, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAzT4oBgHgl3EQf8f6B/content/2301.01905v1.pdf'} +page_content=' Yang, Y.' 