Scalable Parallel Computing Lab, SPCL @ ETH Zurich
  • Videos: 210
  • Views: 134,408
How to find Relevant Items using Approximate Nearest Neighbor Search
We motivate the problem of nearest neighbor search, and we discuss exact and approximate algorithms to solve this problem.
Timestamps:
00:00: Introduction
00:14: Motivation
03:01: KD-Tree
08:06: HNSW
13:05: IVF-PQ
20:10: Comparison
21:09: Conclusion
Views: 124
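As a quick illustration of the contrast the video draws between exact and approximate search, here is a minimal NumPy sketch (not taken from the talk; the IVF parameters nlist and nprobe are illustrative assumptions): brute-force search scans every vector, while an IVF-style index scans only a few clusters near the query.

```python
# Minimal sketch (illustrative, not from the video): exact brute-force search
# vs. a simple IVF-style approximate search. nlist/nprobe are assumed values.
import numpy as np

rng = np.random.default_rng(0)
database = rng.standard_normal((5000, 32)).astype(np.float32)
query = rng.standard_normal(32).astype(np.float32)

def exact_nn(db, q):
    """Exact nearest neighbor: compare the query against every database vector."""
    return int(np.argmin(np.linalg.norm(db - q, axis=1)))

def build_ivf(db, nlist=32, iters=10):
    """Cluster the database with a few k-means iterations (the 'inverted file')."""
    centroids = db[rng.choice(len(db), nlist, replace=False)]
    for _ in range(iters):
        assign = np.argmin(np.linalg.norm(db[:, None] - centroids[None], axis=2), axis=1)
        for c in range(nlist):
            members = db[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    lists = [np.where(assign == c)[0] for c in range(nlist)]
    return centroids, lists

def ivf_search(db, centroids, lists, q, nprobe=4):
    """Approximate search: scan only the nprobe clusters closest to the query."""
    probe = np.argsort(np.linalg.norm(centroids - q, axis=1))[:nprobe]
    candidates = np.concatenate([lists[c] for c in probe])
    return int(candidates[np.argmin(np.linalg.norm(db[candidates] - q, axis=1))])

centroids, lists = build_ivf(database)
print("exact:", exact_nn(database, query))
print("approximate:", ivf_search(database, centroids, lists, query))
```

The approximate result usually matches the exact one, but only a fraction of the database is scanned; increasing nprobe trades speed for recall.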

Videos

Exascale Cloud Computing - A Foggy Tale of Networks, AI, Containers, and Ultra Ethernet
305 views • 14 days ago
Torsten Hoefler's talk presented at the Salishan 2024 meeting featuring Acceleration as a Service (XaaS), Datacenter and HPC network convergence, performance studies of networking across many datacenter providers, network noise analyses, latency sensitivity, and Ultra Ethernet news.
Swing: Short-cutting Rings for Higher Bandwidth Allreduce
132 views • 3 months ago
Paper Title: Swing: Short-cutting Rings for Higher Bandwidth Allreduce Conference: NSDI 2024 Speaker: Daniele De Sensi Authors: Daniele De Sensi, Tommaso Bonato, David Saam, Torsten Hoefler Abstract: The allreduce collective operation accounts for a significant fraction of the runtime of workloads running on distributed systems. One factor determining its performance is the distance between com...
Neural Graph Databases
123 views • 4 months ago
Paper Title: Neural Graph Databases Conference: First Learning on Graphs Conference (LoG'22) Speaker: Maciej Besta Authors: Maciej Besta, Patrick Iff, Florian Scheidl, Kazuki Osawa, Nikoli Dryden, Michal Podstawski, Tiancheng Chen, Torsten Hoefler Abstract: Graph databases (GDBs) enable processing and analysis of unstructured, complex, rich, and usually vast graph datasets. Despite the large si...
HOT - Higher-Order Dynamic Graph Representation Learning with Efficient Transformers
114 views • 4 months ago
Paper Title: HOT - Higher-Order Dynamic Graph Representation Learning with Efficient Transformers Conference: Second Learning on Graphs Conference (LoG'23) Speaker: Maciej Besta Authors: Maciej Besta, Afonso Claudino Catarino, Lukas Gianinazzi, Nils Blach, Piotr Nyczyk, Hubert Niewiadomski, Torsten Hoefler Abstract: Many graph representation learning (GRL) problems are dynamic, with millions of...
LRSCwait: Enabling Scalable and Efficient Synchronization in Manycore Systems
94 views • 4 months ago
Paper Title: LRSCwait: Enabling Scalable and Efficient Synchronization in Manycore Systems through Polling-Free and Retry-Free Operation Conference: Design, Automation and Test in Europe Conference (DATE 2024) Speaker: Samuel Riedel Authors: Samuel Riedel, Marc Gantenbein, Alessandro Ottaviano, Torsten Hoefler, Luca Benini Abstract: Extensive polling in shared-memory manycore systems can lead t...
Compressing Multidimensional Weather and Climate Data Into Neural Networks
134 views • 5 months ago
Title: Compressing multidimensional weather and climate data into neural networks Speaker: Langwen Huang Author: Langwen Huang, Torsten Hoefler Abstract: Weather and climate simulations produce petabytes of high-resolution data that are later analyzed by researchers in order to understand climate change or severe weather. We propose a new method of compressing this multidimensional weather and ...
VENOM: A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores
599 views • 5 months ago
Paper Title: VENOM: A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores Venue: International Conference for High Performance Computing, Networking, Storage, and Analysis (#SC23) Speaker: Roberto L. Castro Authors: Roberto L. Castro, Andrei Ivanov, Diego Andrade, Tal Ben-Nun, Basilio B. Fraguela, Torsten Hoefler Abstract: The increasing success and scaling of Deep Learning mo...
Motif Prediction with Graph Neural Networks
297 views • 6 months ago
Paper Title: Motif Prediction with Graph Neural Networks Conference: 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'22) Speaker: Maciej Besta Authors: Maciej Besta, Raphael Grob, Cesare Miglioli, Nicola Bernold, Grzegorz Kwaśniewski, Gabriel Gjini, Raghavendra Kanakagiri, Saleh Ashkboos, Lukas Gianinazzi, Nikoli Dryden, Torsten Hoefler Abstract: Link prediction is one of...
Demystifying Chains, Trees, and Graphs of Thoughts
254 views • 6 months ago
Paper Title: Demystifying Chains, Trees, and Graphs of Thoughts Speaker: Maciej Besta Authors: Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Guangyuan Piao, Nils Blach, Piotr Nyczyk, Marcin Copik, Grzegorz Kwaśniewski, Jürgen Müller, Lukas Gianinazzi, Ales Kubicek, Hubert Niewiadomski, Aidan O'Mahony, Onur Mutlu, Torsten Hoefler Abstract: The field of natural language process...
[SPCL_Bcast] The digital revolution of Earth system modelling
165 views • 6 months ago
Speaker: Peter Dueben Venue: SPCL_Bcast #47, recorded on 4th April, 2024 Abstract: This talk outlines three revolutions that happened in Earth system modelling in the past decades. The quiet revolution has leveraged better observations and more compute power to allow for constant improvements of prediction quality of the last decades, the digital revolution has enabled us to perform km-scale si...
[SPCL_Bcast] Capturing Computation with Algorithmic Alignment
180 views • 6 months ago
Speaker: Petar Veličković Venue: SPCL_Bcast #46, recorded on 21st March, 2024 Abstract: What makes a neural network better, or worse, at fitting certain tasks? This question is arguably at the heart of neural network architecture design, and it is remarkably hard to answer rigorously. Over the past few years, there have been a plethora of attempts, using various facets of advanced mathematics, ...
Co-design Hardware and Algorithm for Vector Search
274 views • 6 months ago
Paper Title: Co-design Hardware and Algorithm for Vector Search Venue: SC'23, Denver CO Speaker: Wenqi Jiang Authors: Wenqi Jiang, Shigang Li, Yu Zhu, Johannes de Fine Licht, Zhenhao He, Runbin Shi, Cedric Renggli, Shuai Zhang, Theodoros Rekatsinas, Torsten Hoefler, Gustavo Alonso Abstract: Vector search has emerged as the foundation for large-scale information retrieval and machine learning sy...
Demystifying Graph Databases
130 views • 6 months ago
Paper Title: Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries Journal: ACM Computing Surveys Speaker: Maciej Besta Authors: Maciej Besta, Robert Gerstenberger, Emanuel Peter, Marc Fischer, Michał Podstawski, Claude Barthels, Gustavo Alonso, Torsten Hoefler Abstract: Graph processing has become an important part of multiple areas of comp...
Fortran is dead - Long live Fortran!
1.6K views • 7 months ago
Torsten Hoefler's random access spontaneous talk given at the 42nd anniversary Salishan Conference on High-Speed Computing in 2023. Discusses how to lift Fortran code to a data-centric representation to optimize it for accelerator devices. Work led by Alexandru Calotoiu in SPCL.
Hot Interconnects - EtherNET: the present and future of datacenter and supercomputers
342 views • 8 months ago
[SPCL_Bcast] Can I Cook a 5 o'clock Compiler Cake and Eat It at 2?
220 views • 9 months ago
AI-Driven Performance Metaprogramming
536 views • 9 months ago
HammingMesh: A Network Topology for Large-Scale Deep Learning
632 views • 9 months ago
GDI: Scaling Online Transactional and Analytical Graph Workloads to Hundreds of Thousands of Cores
119 views • 11 months ago
[SPCL_Bcast] Scalable Graph Machine Learning
171 views • 11 months ago
[SPCL_Bcast] Heterogeneous multi-core systems for efficient EdgeML
317 views • 1 year ago
[SPCL_Bcast] Evaluating Large-Scale Learning Systems
248 views • 1 year ago
ML for High-Performance Climate: Data Post Processing, Compression, and Earth Virtualization Engines
517 views • 1 year ago
HexaMesh: Scaling to Hundreds of Chiplets with an Optimized Chiplet Arrangement
447 views • 1 year ago
How to Adjust Network-on-Chip Topologies to Design Goals and Architectures
971 views • 1 year ago
Noise in the Clouds: Influence of Network Performance Variability on Application Scalability
219 views • 1 year ago
Scheduling Task Graphs on Dataflow Architectures
361 views • 1 year ago
Bjorn Stevens on Earth Virtualization Engines (EVE)
1.1K views • 1 year ago
"From Two Strong Oxen to Billions of Fleas." Torsten Hoefler's Sidney Fernbach Award Lecture at SC22
365 views • 1 year ago
"From Two Strong Oxen to Billions of Fleas." Torsten Hoefler's Sidney Fernbach Award Lecture at SC22

Comments

  • @patrickkearney1577
    @patrickkearney1577 2 months ago

    First, two non-verbatim quotes: real programmers can write FORTRAN code in any language, and computer scientists solve yesterday's problems with tomorrow's hardware. I am as old as FORTRAN and have coded in FORTRAN, C, APL, ALGOL, Forth, LISP, BASIC, Pascal, Mathematica, MATLAB, and various scripting languages, and have even run scientific computation on laser printers overnight using PostScript. I have also written real-time operating systems in C and assembly language. I firmly believe that the design philosophy of modern computer languages became decoupled from consideration of current and future hardware capabilities and of the more general resources available prior to and during execution of a program. Operating-system support for client processes is also mostly very poor. For example, parallel code execution was not possible on early computers; modern vector processors, FPGAs, or multi-core CPUs can handle concurrent parallel computation, but efficient, formalized code design is sorely lacking.

  • @mikgigs
    @mikgigs 2 months ago

    Everything is nice, super-duper, but show a software example that uses AIE for... AI, code that starts from the C/C++ level... no FIR filter, no RGB conversion, but really something related to AI... show at least something!

  • @jameschums
    @jameschums 2 months ago

    Thank you for introducing some concepts I had not really considered before. Will future compilers mix languages and optimise code to use different hardware for different operations? Great talk, thank you. Now I am thinking AI tools could optimise Fortran, C/CUDA, Python... ?

    • @pichulinojitoojete7387
      @pichulinojitoojete7387 26 days ago

      That said, why not write an artificial-intelligence program in Fortran? A good challenge to pass the time.

  • @abhinavghosh725
    @abhinavghosh725 4 months ago

    Is this planned to be released as a general-purpose release/integration with current Kafka versions? Is this usable for production use cases, or is it still under testing?

  • @maryamsamami6974
    @maryamsamami6974 5 months ago

    Dear Mr. Hoefler! Thanks for the useful video. May I ask if you could please share the slides of the video with me?

  • @ChrisPollitt
    @ChrisPollitt 5 months ago

    TIOBE Index for May 2024: Fortran in the top 10

  • @Machineman2500
    @Machineman2500 6 months ago

    Fortran is still used today, particularly in scientific, engineering, and high-performance computing applications where numerical computation and performance are critical. While newer languages like Python and Julia have gained popularity for general-purpose programming and rapid prototyping, Fortran remains widely used in fields such as computational physics, climate modeling, computational chemistry, and finite element analysis

  • @FindecanorNotGmail
    @FindecanorNotGmail 6 months ago

    Correction at 33:45: when he says "4 ecks speed up", he means "four _times_".

  • @simonpeter9617
    @simonpeter9617 6 months ago

    good work

  • @ΜιχαήλΣάπκας
    @ΜιχαήλΣάπκας 7 months ago

    I really doubt you can run YOLO on Versal :P

  • @kamertonaudiophileplayer847
    @kamertonaudiophileplayer847 7 months ago

    My friend claimed that he can program everything in Fortran. How true it is! I also converted many original weather-model calculations from Algol to Fortran. They work great.

  • @superkaran20
    @superkaran20 8 months ago

    Thank you so much, it was very helpful.

  • @vinayakkesharwani7769
    @vinayakkesharwani7769 8 months ago

    Great explanation, this video saved me hours I would have spent trying to understand it from the docs.

  • @backToFreedom
    @backToFreedom 8 months ago

    The sound is terrible! Fix it if you want it to be listened to.

  • @HarishNarayanan
    @HarishNarayanan 8 months ago

    Thank you very much for this talk, and especially for providing context for where it sits in the field.

  • @bhamadicharef
    @bhamadicharef 1 year ago

    Excellent presentation ... the AI Engine (AIE) looks great !

  • @mar-xpro
    @mar-xpro 1 year ago

    Nice talk! Super interesting to see this DL-HPC double perspective. Btw, the email address of the website mentioned at 57:55 for hiring seems unreachable; emails sent there bounce back after 2-3 days without being delivered.

  • @sanaulislam2354
    @sanaulislam2354 1 year ago

    Which software did you use for the design?

    • @spcl
      @spcl 1 year ago

      All results are obtained using our custom NoC cost and performance prediction toolchain (see spclgitlab.ethz.ch/iffp1/sparse_hamming_graph_public ) - Does this answer your question?

  • @infinite-saaswath
    @infinite-saaswath 1 year ago

    Great stuff!

  • @Reskareth
    @Reskareth 1 year ago

    But what happens when one node has multiple connected nodes which have a lower ID? Then one node would need to point to multiple other nodes. What am I missing?

  • @serpantleo8490
    @serpantleo8490 1 year ago

    Very good paper, love from 🇨🇳

  • @kipropcollins4220
    @kipropcollins4220 1 year ago

    Wasn't there a more interesting way to deliver this? I mean, seriously?

  • @bobl557
    @bobl557 1 year ago

    Most of the paths in a large system are 4 hops long, not three. In the 545 group example he uses, there is only one link between each global group. So, the first two hops get you to the correct global bus. The next two hops get you to the terminal switch. The largest system that would have a three hop maximum is 9,216 comprising 18 groups.

    • @danieledesensi5532
      @danieledesensi5532 1 year ago

      Hops are counted as switch-to-switch hops. Switches within a group are fully connected, thus you need in the worst case one hop to reach another switch in the source group, one hop to reach the destination group, and one hop in the destination group to reach the destination switch.
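      To make that counting concrete, here is a tiny illustrative sketch (an assumed model of the topology, not code from the paper): with fully connected switches inside a group and a direct global link between the source and destination groups, the worst-case switch-to-switch path is three hops.

```python
# Illustrative sketch of worst-case switch-to-switch hop counting in a
# Dragonfly-like topology with fully connected groups (assumed model).
def switch_hops(src_owns_global_link: bool, dst_owns_global_link: bool) -> int:
    hops = 0
    if not src_owns_global_link:
        hops += 1  # reach the switch in the source group that holds the global link
    hops += 1      # traverse the global link to the destination group
    if not dst_owns_global_link:
        hops += 1  # reach the destination switch inside the destination group
    return hops

print(switch_hops(False, False))  # 3, the worst case described in the reply above
print(switch_hops(True, True))    # 1, when both endpoints sit on the linking switches
```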

  • @congchuatocmay4837
    @congchuatocmay4837 1 year ago

    Well, yeah, you are really not interested in DDL.

  • @darrenjefferson6492
    @darrenjefferson6492 1 year ago

    Nice one pal 😀!! Get rapid results > 'promosm' .

  • @howwway4999
    @howwway4999 2 years ago

    That's really cool, hope I can get an opportunity for the possible PhD position in your lab 😃

  • @mprone
    @mprone 2 years ago

    Is there any open PhD position at ETH on these topics?

    • @spcl
      @spcl 2 years ago

      Yes, see spcl.inf.ethz.ch/Jobs/

    • @elliot2456
      @elliot2456 2 years ago

      @@spcl Is that a PhD position or a job for PhD students? Why does it say "Contracts will be 12-month renewable, with an initial probatory period"? I thought PhDs were supposed to last at least 3 years.

  • @Qmpi
    @Qmpi 2 years ago

    and where am I?

  • @wassimmuna
    @wassimmuna 2 years ago

    Society 5.0 ... Is that where we finally get a for-loop to iterate through the entire population to serve every inhabitant's needs and desires, instead of passing policies in a top-down approach and wondering why there are still dissatisfied people left behind somehow... or is this going to be another instance of promising technology that only entrenches preexisting distributions of security, opportunities, comforts and luxuries. And for the record, karma is illegal vigilantism. Most promising technology starts with idealistic intentions and ends up being misused to dish out varying degrees of harm. Pardon me if I don't understand why my for-loop hasn't already been implemented on a 486. Maybe that'll be Society 6.0. But obviously, great work by the researchers. Let's just hope the decision-makers live up to the same standard of effort and quality of intent.

  • @shikharjain3536
    @shikharjain3536 2 years ago

    What is the difference between a program dependence graph [by Ferrante & Ottenstein] and a contextual flow graph?

  • @prithvivelicheti287
    @prithvivelicheti287 2 years ago

    Insightful

  • @zeyuli3258
    @zeyuli3258 2 years ago

    Could you please upload your source code again? It seems to be 404 now :(

    • @spcl
      @spcl 2 years ago

      The code was released a few minutes after your message. Please check again. Thanks!

  • @vedanshverma6854
    @vedanshverma6854 2 years ago

    The best tutorial to get an idea of how cool actual programming for HPC using HLS on FPGAs is.

  • @kowsalyas5259
    @kowsalyas5259 2 years ago

    Y f

  • @alle9ro
    @alle9ro 2 years ago

    she is amazing

  • @wolfgangmitterbaur3942
    @wolfgangmitterbaur3942 2 years ago

    Good day Mr. Hoefler, a very good and extensive overview of this huge topic. Thanks a lot.

  • @qwmp
    @qwmp 2 years ago

    This is just truly a great gem!

  • @sanjeewaweerage9407
    @sanjeewaweerage9407 2 years ago

    can I have this ppt?

  • @paulthompson9668
    @paulthompson9668 2 years ago

    This is very informative content, but you need to slow down because you end up mispronouncing words at times.

  • @oscarsandoval9870
    @oscarsandoval9870 2 years ago

    Excellent review of the state of the art, well explained and concise, thank you Torsten!

  • @hitmanonstadia1784
    @hitmanonstadia1784 2 years ago

    Nice slides! However, the speaker speaks too fast, like a rapper; it leaves me with painful headaches after the talk. :((((

  • @byliu5200
    @byliu5200 2 years ago

    Very helpful! Thank you!

  • @alexxx4434
    @alexxx4434 2 years ago

    Very nice presentation

  • @SandipJadhavcctech
    @SandipJadhavcctech 3 years ago

    Thanks a ton. Very helpful 👌

  • @ayushchaturvedi5203
    @ayushchaturvedi5203 3 years ago

    Where can I find the slides of this presentation?

  • @hossamfadeel
    @hossamfadeel 3 years ago

    Thanks for your efforts.

  • @zachariasfisches7018
    @zachariasfisches7018 3 years ago

    Great presentation!

  • @shihlien
    @shihlien 3 years ago

    GPT-2 model memory will saturate one of the WSC SRAM, right?

  • @spcl
    @spcl 3 years ago

    At 1:50 Prof. Hoefler says we will not use linearizability in the lecture. To clarify: We do not use linearizability in this lecture, but we will introduce linearizability in a later lecture.

  • @hoaxuan7074
    @hoaxuan7074 3 years ago

    You can put a random projection before a sparse neural network. This shares out all the information everywhere in the input evenly. Then each sparse dot product gets a fair sub-sample of the input vector. A more structured sub-random projection could be better.
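    A minimal NumPy sketch of that idea (illustrative only; the sizes and sparsity pattern are assumptions): project the input with a fixed random matrix first, so every sparse dot product afterwards sees a mixed sub-sample of the whole input.

```python
# Illustrative sketch (sizes and sparsity pattern are assumptions): a fixed
# random projection in front of a sparse layer, so each sparse dot product
# sees a mixed sub-sample of the whole input vector.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k = 256, 64, 16   # input width, output units, nonzeros per unit
x = rng.standard_normal(d_in)

# Random projection: fixed Gaussian matrix, scaled to roughly preserve norms.
P = rng.standard_normal((d_in, d_in)) / np.sqrt(d_in)

# Sparse layer: each output unit connects to only k of the projected coordinates.
W = np.zeros((d_out, d_in))
for i in range(d_out):
    idx = rng.choice(d_in, size=k, replace=False)
    W[i, idx] = rng.standard_normal(k)

y = W @ (P @ x)   # projection first, then the sparse dot products
print(y.shape)    # (64,)
```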