- Videos: 15
- Views: 7,131
ARC SNU
Joined Mar 25, 2013
Architecture and Code Optimization (ARC) Lab.
Seoul National University (SNU)
[ECCV'24] Frugal 3D Point Cloud Model Training via Progressive Near Point Filtering and Fused Aggregation
Abstract
The increasing demand for higher accuracy and the rapid growth of 3D point cloud datasets have led to significantly higher training costs for 3D point cloud models in terms of both computation and memory bandwidth. Despite this, research on reducing this cost is relatively sparse. This paper identifies inefficiencies of unique operations in the 3D point cloud training pipeline: farthest point sampling (FPS) and forward and backward aggregation passes. To address the inefficiencies, we propose novel training optimizations that reduce redundant computation and memory accesses re...
Views: 94
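The abstract singles out farthest point sampling (FPS) as one of the costly operations. For context, below is a minimal NumPy sketch of plain FPS, which greedily picks the point farthest from the already selected set; it illustrates the baseline operation the paper optimizes, not the paper's progressive near point filtering, and all names in it are illustrative.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, num_samples: int) -> np.ndarray:
    """Plain FPS: greedily pick the point farthest from the current sample set.

    points: (N, 3) array of xyz coordinates.
    Returns indices of the selected points, shape (num_samples,).
    """
    n = points.shape[0]
    selected = np.zeros(num_samples, dtype=np.int64)
    # Distance from every point to the nearest already-selected point.
    min_dist = np.full(n, np.inf)
    # Start from an arbitrary point (index 0 here).
    selected[0] = 0
    for i in range(1, num_samples):
        # Only the most recently selected point can change the minimum distances.
        last = points[selected[i - 1]]
        dist = np.sum((points - last) ** 2, axis=1)
        min_dist = np.minimum(min_dist, dist)
        # The next sample is the point farthest from the selected set.
        selected[i] = int(np.argmax(min_dist))
    return selected

# Example: sample 128 centroids from 4,096 random points.
pts = np.random.rand(4096, 3).astype(np.float32)
centroids = farthest_point_sampling(pts, 128)
```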
Videos
[VLDB'23] WALTZ: Leveraging Zone Append to Tighten the Tail Latency of LSM Tree on ZNS SSD
132 views · 1 year ago
Abstract: We propose WALTZ, an LSM tree-based key-value store on the emerging Zoned Namespace (ZNS) SSD. The key contribution of WALTZ is to leverage the zone append command, which is a recent addition to ZNS SSD specifications, to provide tight tail latency. The long tail latency problem caused by the mer...
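For background, the difference between a conventional zone write and zone append is that an append lets the device pick the location inside the zone and report it back, so the host does not have to serialize writes on a single write pointer. The following is a toy, hypothetical Python model of that contract (no real NVMe/ZNS API is used); it only illustrates the command semantics the paper builds on, not WALTZ itself.

```python
from dataclasses import dataclass, field
from threading import Lock

@dataclass
class Zone:
    """Toy model of a ZNS zone in which the device owns the write pointer."""
    start_lba: int
    capacity: int
    write_pointer: int = 0
    data: list = field(default_factory=list)
    _lock: Lock = field(default_factory=Lock)

    def append(self, payload: bytes) -> int:
        """Zone append: the device chooses the LBA and returns it to the caller.

        The host does not need to know the write pointer in advance, so
        concurrent appends need not be ordered by the host.
        """
        with self._lock:
            if self.write_pointer >= self.capacity:
                raise IOError("zone full")
            assigned_lba = self.start_lba + self.write_pointer
            self.data.append(payload)
            self.write_pointer += 1
            return assigned_lba

# A log built on appends can simply record where each entry landed.
zone = Zone(start_lba=0x1000, capacity=4)
lba = zone.append(b"put k1=v1")
print(f"entry written at LBA {lba:#x}")
```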
[AAAI'23] Not All Neighbors Matter: Point Distribution-Aware Pruning for 3D Point Cloud
327 views · 1 year ago
This is the talk video presented at AAAI'23. Full paper is available at yjyjlee.github.io/assets/pdf/aaai23_pointcloud.pdf. Abstract: Applying deep neural networks to 3D point cloud processing has demonstrated a rapid pace of advancement in those domains where 3D geometry information can greatly boost task performance, such as AR/VR, robotics, and autonomous driving. However, as the size of both ...
[VLDB'22] Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching
326 views · 2 years ago
Abstract: Recently, Graph Neural Networks (GNNs) have been receiving a spotlight as a powerful tool that can effectively serve various inference tasks on graph structured data. As the size of real-world graphs continues to scale, the GNN training system faces a sc...
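The "provably optimal in-memory caching" in the title refers to offline-optimal caching in the spirit of Belady's MIN policy, which is optimal when the future access sequence is known. The sketch below simulates generic Belady eviction over a known access trace; it is an illustration of that classic policy, not Ginex's actual implementation, and the example trace is made up.

```python
def belady_cache_misses(trace, capacity):
    """Simulate an offline-optimal (Belady MIN) cache over a known trace.

    On eviction, discard the cached item whose next use is farthest in the
    future (or never). Returns the number of misses.
    """
    # Precompute, for each position, the next occurrence of the accessed item.
    next_use = [None] * len(trace)
    last_seen = {}
    for i in range(len(trace) - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], float("inf"))
        last_seen[trace[i]] = i

    cache = {}  # item -> index of its next use
    misses = 0
    for i, item in enumerate(trace):
        if item in cache:
            cache[item] = next_use[i]  # refresh: when is it needed next?
            continue
        misses += 1
        if len(cache) >= capacity:
            # Evict the item needed farthest in the future.
            victim = max(cache, key=cache.get)
            del cache[victim]
        cache[item] = next_use[i]
    return misses

# Example: feature accesses for a few (hypothetical) sampled mini-batches.
trace = [3, 7, 3, 9, 7, 3, 1, 9, 7]
print(belady_cache_misses(trace, capacity=2))
```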
[DAC'22] Effective Zero Compression on ReRAM-based Sparse DNN Accelerators
183 views · 2 years ago
This is the talk video presented at DAC'22. Full paper is available at arc.snu.ac.kr/pubs/dac22_reram.pdf. Abstract: For efficient DNN inference, Resistive RAM (ReRAM) crossbars have emerged as a promising building block to compute matrix multiplication in an area- and power-efficient manner. To improve inference throughput, sparse models can be deployed on the ReRAM-based DNN accelerator. While un...
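The abstract describes ReRAM crossbars computing matrix multiplication and sparse models whose zero weights need not be processed. As a purely software-level illustration of the zero-skipping idea (not the paper's hardware compression scheme), the sketch below stores only the nonzero weights in a CSR-like layout and multiplies by a dense input vector; all names are illustrative.

```python
import numpy as np

def to_csr(dense: np.ndarray):
    """Compress a sparse weight matrix: keep only the nonzero values per row."""
    values, cols, row_ptr = [], [], [0]
    for row in dense:
        nz = np.nonzero(row)[0]
        values.extend(row[nz])
        cols.extend(nz)
        row_ptr.append(len(values))
    return np.array(values), np.array(cols), np.array(row_ptr)

def csr_matvec(values, cols, row_ptr, x):
    """Matrix-vector product that touches only the stored nonzeros."""
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(y)):
        lo, hi = row_ptr[r], row_ptr[r + 1]
        y[r] = np.dot(values[lo:hi], x[cols[lo:hi]])
    return y

# A 75%-sparse weight matrix: most multiply-accumulates are skipped entirely.
w = np.random.rand(8, 8) * (np.random.rand(8, 8) > 0.75)
x = np.random.rand(8)
vals, cols, ptr = to_csr(w)
assert np.allclose(csr_matvec(vals, cols, ptr, x), w @ x)
```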
[HPCA'22] ANNA: Specialized Architecture for Approximate Nearest Neighbor Search
771 views · 2 years ago
This is the talk video presented at HPCA'22. Full paper is available at arc.snu.ac.kr/pubs/hpca22_anna.pdf. Abstract: Similarity search, or nearest neighbor search, is the task of retrieving a set of vectors in the (vector) database that are most similar to the provided query vector. It has been a key kernel for many applications for a long time. However, it is becoming especially more important in r...
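As a baseline for what "nearest neighbor search" computes, here is a brute-force exact k-NN over a small database using squared L2 distance; ANNA accelerates approximate variants of this in hardware, which this sketch does not attempt to model.

```python
import numpy as np

def knn_search(database: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    """Exact k-nearest-neighbor search by brute force.

    database: (N, d) matrix of stored vectors.
    query:    (d,) query vector.
    Returns the indices of the k closest vectors under squared L2 distance.
    """
    dists = np.sum((database - query) ** 2, axis=1)
    # argpartition finds the k smallest distances without a full sort.
    nearest = np.argpartition(dists, k)[:k]
    return nearest[np.argsort(dists[nearest])]

db = np.random.rand(10_000, 128).astype(np.float32)
q = np.random.rand(128).astype(np.float32)
print(knn_search(db, q, k=10))
```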
[ISCA'21] ELSA: Hardware-Software Co-design for Efficient, Lightweight Self-Attention Mechanism
814 views · 3 years ago
This is the talk video presented at ISCA'21. Full paper is available at snu-arc.github.io/pubs/isca21_elsa.pdf. Abstract: The self-attention mechanism is rapidly emerging as one of the most important key primitives in neural networks (NNs) for its ability to identify the relations within input entities. The self-attention-oriented NN models such as Google Transformer and its variants have establi...
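For reference, the computation ELSA targets is standard scaled dot-product self-attention; a minimal NumPy version is shown below (single head, no masking), purely to expose the score/softmax/weighted-sum structure, not the paper's approximation or hardware.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.

    x: (n, d_model) input entities; w_q/w_k/w_v: (d_model, d_head) projections.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Relevance of every entity to every other entity: (n, n) score matrix.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax (numerically stabilized).
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a relevance-weighted sum of the value vectors.
    return weights @ v

n, d_model, d_head = 16, 64, 32
rng = np.random.default_rng(0)
x = rng.standard_normal((n, d_model))
out = self_attention(x,
                     rng.standard_normal((d_model, d_head)),
                     rng.standard_normal((d_model, d_head)),
                     rng.standard_normal((d_model, d_head)))
print(out.shape)  # (16, 32)
```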
[ASPLOS'21] MERCI: Efficient Embedding Reduction on Commodity Hardware via Sub-query Memoization
671 views · 3 years ago
This is the talk video presented at ASPLOS'21. Full paper is available at snu-arc.github.io/pubs/asplos21_merci.pdf. Abstract: Deep neural networks (DNNs) with embedding layers are widely adopted to capture complex relationships among entities within a dataset. Embedding layers aggregate multiple embeddings (a dense vector used to represent the complicated nature of a data feature) into a single e...
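The embedding reduction described here is a gather of embedding-table rows followed by an element-wise reduction (sum pooling below). The sketch shows that baseline plus a naive memo of repeated ID subsets, only to convey the "sub-query memoization" idea in the title; it does not reflect MERCI's actual clustering or memory layout, and every name in it is illustrative.

```python
import numpy as np

# Hypothetical embedding table: 1M rows, 64-dim vectors.
table = np.random.rand(1_000_000, 64).astype(np.float32)

# Naive memo of partial sums for previously seen ID subsets.
_memo: dict[tuple, np.ndarray] = {}

def reduce_embeddings(ids, memo_groups=()):
    """Sum-pool the embedding rows for `ids`.

    memo_groups: illustrative, caller-provided subsets of `ids` whose partial
    sums are cached and reused across queries (the 'sub-query' idea).
    """
    remaining = set(ids)
    total = np.zeros(table.shape[1], dtype=np.float32)
    for group in memo_groups:
        if not set(group) <= remaining:
            continue
        key = tuple(sorted(group))
        if key not in _memo:                      # compute the partial sum once
            _memo[key] = table[list(group)].sum(axis=0)
        total += _memo[key]                       # reuse: one add, not |group| reads
        remaining -= set(group)
    total += table[list(remaining)].sum(axis=0)   # leftover IDs, gathered directly
    return total

# Two queries sharing the frequently co-occurring subset {10, 42, 99}.
q1 = reduce_embeddings([10, 42, 99, 7], memo_groups=[(10, 42, 99)])
q2 = reduce_embeddings([10, 42, 99, 500], memo_groups=[(10, 42, 99)])
```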
Unlocking Wordline-level Parallelism for Fast Inference on RRAM-based DNN Accelerator
180 views · 4 years ago
Graphene: Strong yet Lightweight Row Hammer Protection
213 views · 4 years ago
Genesis: A Hardware Acceleration Framework for Genomic Data Analysis (ISCA 2020)
351 views · 4 years ago
A Specialized Architecture for Object Serialization with Applications to Big Data Analytics
217 views · 4 years ago
RIGHT demo video
2.2K views · 11 years ago
RIGHT: R-Interactive-Graphic-via-HTml. Contact: team.rightjs@gmail.com
RIGHT Demo Video
114 views · 11 years ago
RIGHT: R-Interactive-Graphics-via-HTml. E-mail: team.rightjs@gmail.com