Valence Labs
  • 289 videos
  • 362,040 views
Derivative-Free Guidance in Continuous and Discrete Diffusion Models | Xiner Li and Masatoshi Uehara
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences. However, rather than merely generating designs that are natural, we often aim to optimize downstream reward functions while preserving the naturalness of these design spaces. Existing methods for achieving this goal often require "differentiable" proxy models (e.g., classifier guidance or DPS) or involve computationally expensive fine-tuning of diffusion models (e.g., classifier-free guidance, RL-based...
Views: 5
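As a loose illustration of what "derivative-free" buys you, the sketch below guides a one-dimensional toy diffusion sampler by resampling candidate reverse steps in proportion to a black-box reward. This is a generic sketch under toy assumptions, not the method presented in the talk; base_reverse_step, reward, and guided_sample are all hypothetical names. The key point is that the reward is only evaluated, never differentiated, in contrast to classifier guidance, which needs the gradient of a differentiable proxy.

```python
# Hypothetical toy example of derivative-free reward guidance for a 1-D diffusion
# sampler: at each reverse step, draw K candidates from a toy base kernel and
# resample them with weights proportional to exp(reward / temperature).
import numpy as np

rng = np.random.default_rng(0)

def base_reverse_step(x_t, t, n_steps):
    """Toy unconditional reverse step: shrink towards 0 and add a little noise."""
    alpha = 1.0 - 1.0 / n_steps
    return np.sqrt(alpha) * x_t, np.sqrt(1.0 - alpha)

def reward(x):
    """Black-box, possibly non-differentiable reward: prefer samples near +2."""
    return -np.abs(x - 2.0)

def guided_sample(n_steps=50, K=16, temperature=0.1):
    x = rng.normal()                                  # start from the prior, x_T ~ N(0, 1)
    for t in range(n_steps, 0, -1):
        mean, std = base_reverse_step(x, t, n_steps)
        candidates = mean + std * rng.normal(size=K)  # K proposals from the base model
        w = np.exp(reward(candidates) / temperature)  # reward is evaluated, never differentiated
        w /= w.sum()
        x = rng.choice(candidates, p=w)               # resample one candidate
    return x

print(guided_sample())  # typically lands near the high-reward region around 2.0
```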

Videos

A long-context RNA foundation model for predicting transcriptome architecture | Ali Saberi
280 views · 7 days ago
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg Linking DNA sequence to genomic function remains one of the grand challenges in genetics and genomics. Here, we combine large-scale single-molecule transcriptome sequencing of diverse cancer cell lines with cutting-edge machine learning to b...
Fine-tuning Flow and Diffusion Generative Models | Carles Domingo-Enrich
663 views · 19 days ago
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg Dynamical generative models that produce samples through an iterative process, such as Flow Matching and denoising diffusion models, have seen widespread use, but there have not been many theoretically sound methods for improving these models...
Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo | Rob Brekelmans
786 views · 27 days ago
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg Numerous capability and safety techniques of Large Language Models (LLMs), including RLHF, automated red-teaming, prompt engineering, and infilling, can be cast as sampling from an unnormalized target distribution defined by a given reward o...
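As a rough illustration of that framing (not the twisted SMC algorithm from the talk), the sketch below draws sequences from a toy stand-in for a language model and importance-weights them toward a reward-tilted target sigma(x) proportional to p(x) * exp(r(x) / beta). All names here (sample_base, reward, BETA) are hypothetical, and twisted SMC exists precisely to do better than this naive self-normalized estimator.

```python
# Hypothetical toy: self-normalized importance sampling from a reward-tilted target.
# The "language model" is just i.i.d. Bernoulli tokens; real LLMs are autoregressive.
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, N_SAMPLES, BETA = 8, 2000, 0.5

def sample_base(n):
    """Base model p(x): i.i.d. Bernoulli(0.3) tokens, a stand-in for an LLM."""
    return (rng.random((n, SEQ_LEN)) < 0.3).astype(int)

def reward(x):
    """Black-box reward: prefer sequences with many 1s."""
    return x.sum(axis=1).astype(float)

x = sample_base(N_SAMPLES)
w = np.exp(reward(x) / BETA)   # importance weight sigma(x)/p(x) = exp(r(x)/beta)
w /= w.sum()                   # self-normalize
idx = rng.choice(N_SAMPLES, size=5, p=w)
print(x[idx])                  # approximate draws from the tilted target
```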
MolGPS - A Foundational GNN for Molecular Property Prediction
346 views · a month ago
Scaling deep learning models has been at the heart of recent revolutions in language modelling and image generation. Practitioners have observed a strong relationship between model size, dataset size, and performance. However, structure-based architectures such as Graph Neural Networks (GNNs) are yet to show the benefits of scale mainly due to the lower efficiency of sparse operations, large da...
Geometric deep learning framework for de novo genome assembly | Lovro Vrček
558 views · a month ago
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg The critical stage of every de novo genome assembler is identifying paths in assembly graphs that correspond to the reconstructed genomic sequences. The existing algorithmic methods struggle with this, primarily due to repetitive regions cau...
An Open MetaGenomic corpus for mixed-modality genomic language modeling | Andre Cornman
386 views · a month ago
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg Biological language model performance depends heavily on pretraining data quality, diversity, and size. While metagenomic datasets feature enormous biological diversity, their utilization as pretraining data has been limited due to challenge...
Propensity Score Alignment of Unpaired Multimodal Data
388 views · a month ago
Multimodal representation learning techniques typically rely on paired samples to learn common representations, but paired samples are challenging to collect in fields such as biology where measurement devices often destroy the samples. This paper presents an approach to address the challenge of aligning unpaired samples across disparate modalities in multimodal representation learning. We draw...
Tokenized and Continuous Embedding Compressions of Protein Sequence and Structure | Amy X. Lu
654 views · a month ago
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg Existing protein machine learning representations typically model either the sequence or structure distribution, with the other modality implicit. The latent space of sequence-to-structure prediction models such as ESMFold represents the joi...
Discrete Flow Matching | Andrew Campbell
949 views · 2 months ago
Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg Despite Flow Matching and diffusion models having emerged as powerful generative paradigms for continuous variables such as images and videos, their application to high-dimensional discrete data, such as language, is still limited. In this w...
Day 5 - Introducing Bioptic | Vlad Vinograv
202 views · 2 months ago
Join Portal to connect with the speakers: portal.valencelabs.com/ This is a recording from the 2024 Machine Learning for Drug Discovery Summer School hosted at Mila. Speakers: Vlad Vinograv
Day 5 - Hackathon Introduction | Cas Wognum
120 views · 2 months ago
Join Portal to connect with the speakers: portal.valencelabs.com/ This is a recording from the 2024 Machine Learning for Drug Discovery Summer School hosted at Mila. Speakers: Cas Wognum
Day 5 - Open-Source Initiatives & Benchmarking Efforts | Karmen Condic-Jurkic
178 views · 2 months ago
Join Portal to connect with the speakers: portal.valencelabs.com/ This is a recording from the 2024 Machine Learning for Drug Discovery Summer School hosted at Mila. Speakers: Karmen Condic-Jurkic
Day 5 - Protein Folding & Design | Alex Tong
620 views · 2 months ago
Join Portal to connect with the speakers: portal.valencelabs.com/ This is a recording from the 2024 Machine Learning for Drug Discovery Summer School hosted at Mila. Speakers: Alex Tong
Day 5 - LLMs in Drug Discovery | Andres M Bran
501 views · 2 months ago
Join Portal to connect with the speakers: portal.valencelabs.com/ This is a recording from the 2024 Machine Learning for Drug Discovery Summer School hosted at Mila. Speakers: Andres M Bran

Comments

  • @JanKowalski-st6sj · 23 hours ago

    really cool shirt!

  • @eduardocesargarridomerchan5326 · 5 days ago

    Kolmogorov-Arnold Networks tutorial in Spanish: ruclips.net/video/Jb9wMCPUlnc/видео.html

  • @oswack · 8 days ago

    So, based on my understanding, this model's usability hinges on the assumption that one has a "perfect" mapping function whereby no information is lost when applying Kolmogorov's theory to return the 1D edges? Because that in itself can be extremely difficult, even as a near approximation.

  • @juniornicolodi · 14 days ago

    Thank you a lot for the content! Great video!

  • @godzikc · 26 days ago

    Great job @AnneCarpenter

  • @t0s0jain · 29 days ago

    Can you explain why you used a Cα-Cα prior of 3.8 Å for this model when you are denoising Cβ coordinates?

  • @Gaib_al_lisan · a month ago

    What I am not able to fully explain/justify is the accuracy schedule. I understand the existence of an accuracy parameter altogether, since we chose to work with distributions across the entire workflow. I speculate that increasing the accuracy has to do with the "smoothness"/stability of training; if we were to start with a very informative message that the sender transmits to the receiver, then the update step would be very large, propelling the theta parameters to high values. This is just my intuition. It's not explained in the paper.

  • @robertobruzzese8830 · a month ago

    I am interested in this subject of drug discovery! Fine.

  • @aminamoudjar4561 · a month ago

    I'd like to join this lab as a PhD student.

  • @ShiqianTan · a month ago

    Very touching and informative.

  • @agranero6 · a month ago

    I didn't expect to see a mention of Jones polynomials... the last time I talked about that was... well, in the 80s.

  • @jaimehibbard2114 · a month ago

    This is super helpful! I have a molecular biology background, and this talk explained ML for molecular representation and scoring in a very accessible way.

  • @shoubhikdasguptadg9911 · 2 months ago

    I have been following Bharath for some time on his Discord channel. This talk makes me want to fly across continents to meet him in person and thank him for sharing his knowledge with the community.

  • @benzoutandy8435 · 2 months ago

    cool

  • @zapy422 · 2 months ago

    thank you

  • @davidlearnforus · 2 months ago

    Thanks a lot! Based on my current understanding, I would say the quickest starting mental image for understanding BFNs would be telling me that it is a standard NN with Bayesian teacher forcing.

  • @julienblanchon6082 · 2 months ago

    This Hannes Stärk guy is a legend for organizing all this.

  • @Ammz1010 · 2 months ago

    How can I join?

  • @Harirtaylorversion · 2 months ago

    How can we apply for a research position in the lab?

  • @mpetersen4823 · 2 months ago

    Thanks, Boyuan! Really interesting work!

  • @yuguo4786 · 2 months ago

    interesting topic!

  • @alexiscao8749 · 2 months ago

    Mathew sounds almost identical to the biologist Mike Levin from Tufts

  • @vipulverma3640 · 2 months ago

    Can we use this for time series to forecast future values?

  • @wangchentong5623 · 2 months ago

    Thank you for making this; it would be better if there were fewer ads. There are at least 4-5 through this video.

  • @rylieweaver1516 · 2 months ago

    Great conversation, thank you!

  • @rylieweaver1516 · 2 months ago

    Could you elaborate more on how the Allegro model allows for more parallelization? You say that there is no message-passing, but then indicate that there are layers for tensor products. I'm curious what the layers for tensor products are for if there isn't message-passing?

  • @rylieweaver1516 · 2 months ago

    Great research, thank you for the presentation!

  • @marcelhedman7391 · 2 months ago

    Great video

  • @marcelhedman7391 · 2 months ago

    Great video

  • @jihochoi_cs · 2 months ago

    Thank you!

  • @djalu-wahyu · 2 months ago

    The formula in the right corner of the screen can't be read.

  • @benzoutandy8435 · 3 months ago

    cool

  • @ntej7927 · 3 months ago

    Interesting.......Thanks.

  • @johndewey7243 · 3 months ago

    All of these posts are solid platinum. Keep it all coming!

  • @drvanon · 3 months ago

    Martin is having a lot of fun in the background 😂

  • @luke.perkin.inventor · 3 months ago

    I really enjoyed this, strong generalisation! The previous papers on bias-free CNNs are also really good, and there are a few good talks by Eero Simoncelli on YouTube. Do you think you could do a talk on some follow-up papers? It feels like something in this space might be a "Deep Image Prior" moment for 2024! That's cited 3000+ times now.

  • @yes4653 · 3 months ago

    Very useful ❤ and nice explanation @Nicholas Gao

  • @LaiannePlácidoLima · 3 months ago

    Speech mannerisms and disjointed reasoning.

  • @jakubpodsiadlikowski1897 · 3 months ago

    Why am I seeing this?

  • @Pingu_astrocat21 · 3 months ago

    Was waiting for this, thanks a lot!!

  • @fuadalabir9453 · 3 months ago

    Starts at 56:34

  • @chaitjo · 4 months ago

    Thanks for listening! Slides and links to the paper/code/etc are on my website!

    • @robertobruzzese8830 · a month ago

      Which website?

    • @chaitjo · 18 days ago

      @robertobruzzese8830 My personal website, which you can find by googling me or on my YouTube page. Somehow, I am not being allowed to post the link to the slides on this YouTube video - sorry about that.

  • @Fun-bz7ou · 4 months ago

    John Snow?

  • @vegedog-ro1ce · 4 months ago

    Thank you very much for your video. I still have some doubts: is the KAN network suitable for multiple outputs?

  • @Kevin.Kawchak · 4 months ago

    Thank you for the discussion.

  • @Pingu_astrocat21 · 4 months ago

    Thank you for uploading!!

  • @nicolasg.b.1728 · 4 months ago

    Hey, where can I find the animation at @00:28 ?

  • @caiodaumann6728 · 4 months ago

    One question I have is, are these flows monotonically increasing? The usual "block" flows have this nice property, but do these continuous flows trained with flow matching also have this property in the transformations from base to data?

  • @araldjean-charles3924 · 4 months ago

    Are we talking here about a general representation theory? Are B-splines the only basis set that can be used? What about wavelets, Fourier series, etc.?

    • @mrpocock · 4 months ago

      People are now experimenting with other curves. Radial basis functions seem to be a low-cost drop-in for splines. But people are also using Fourier or wavelet functions, for example, which are not splines at all.

  • @spencerfunk6697 · 4 months ago

    exactly 10% of your subs liked