How Super Resolution Works

  • Published: 29 Dec 2024

Comments • 90

  • @utkarshdeshmukh3268
    @utkarshdeshmukh3268 5 years ago +73

    This is one of the best videos I have seen so far related to super resolution. It gives a fantastic summary of many papers. Great work Leo!

  • @harshitpandey638
    @harshitpandey638 5 years ago +5

    I think I hit a gold mine by finding this channel

  • @kartikpodugu
    @kartikpodugu 1 year ago

    I saw this video a year back with limited deep learning knowledge, but still understood the concepts at a higher level. I understood that super resolution is possible with deep neural networks.
    Now, watching this video with more knowledge about different deep neural network architectures has given me more understanding.

  • @user-maymay2002
    @user-maymay2002 8 months ago

    Damn, one of the best SRGAN & ESRGAN explanation vids out there! It was straight to the point and the way of explaining was flawless. Thanks!

  • @poshko41
    @poshko41 3 years ago +1

    The amazing thing is that this is only going to keep getting better as more people use it.

  • @dvirzag
    @dvirzag 1 year ago

    One of the greatest: it brings together material from many courses and applies all of those theorems to day-to-day operations. Well done! :)

  • @tazmeenfatima98
    @tazmeenfatima98 4 years ago +2

    Wow, love how you explain which techniques exist and which papers use them, along with excellent details of super-resolution itself.

  • @myrainys
    @myrainys 3 years ago +1

    Give this man a medal.

  • @_RMSG_
    @_RMSG_ 2 years ago

    4:16 I think it does: if the hair is moving in the wrong direction and the picture is sharp enough, it will look distinctly machine-generated. So perceptually it does matter, but not _necessarily_ for noise reduction

  • @emresafter8629
    @emresafter8629 3 years ago

    Just rushed into the topic and got scared by dense papers. Luckily I found this video and I feel much more familiar with the concept. Thank you for the great summary and explanation!

  • @bhaveshgohel
    @bhaveshgohel 5 years ago +4

    'Super' video of Super Resolution. 👌

  • @Dottor_J
    @Dottor_J 5 years ago +2

    Your videos are always very interesting, thanks Leo

  • @TheAkbar1000
    @TheAkbar1000 5 years ago +1

    Thank you for this extensive review-paper-style video... It really helped me and I am sure it will also help every other undergrad like me who is looking to attempt super resolution. I have studied some papers online and was trying to figure out their place on the timeline and what the current state-of-the-art architectures are. I had also planned to investigate how all of this relates to the famous Google Computational Photography, but you have provided a fine overview...
    Thanks for the links... I will be looking forward to more of your videos...

  • @minthway2736
    @minthway2736 1 year ago

    This video was very helpful for me. Thank you, Leo. You are the best👍

  • @cheng-tsuyu7300
    @cheng-tsuyu7300 4 years ago +2

    Your explanation is very concise and clear! It really helps me in my deep learning class. Thank you so much!

  • @yafesenessahiner4782
    @yafesenessahiner4782 2 years ago

    Awesome content, thank you for all of it. Great job!

  • @chadyonfire7878
    @chadyonfire7878 1 year ago

    Nice, very general view of the topic. We need more of this.

  • @kaoutharaarizou1914
    @kaoutharaarizou1914 5 years ago +5

    Very nice overview! Super-resolution is my Ph.D. research field, and I am particularly interested in task-specific SR.

  • @CodeEmporium
    @CodeEmporium 5 years ago

    Nice! Gonna read up more on your references for this

  • @tonyjames9929
    @tonyjames9929 2 years ago

    The explanation of the data processing inequality was super, thank you (Y)

  • @RemyRAD
    @RemyRAD 4 years ago +2

    I am incredibly intrigued by super resolution. And while this is a great descriptive explanation of what super resolution is, beyond the academic bookwork, how am I to get this for my historic VHS, 4 x 3 productions?
    However, when cropping 260-line-resolution, color-under video to 16 x 9, I'm reducing those 260 lines of resolution to something like 175-200 lines, along with the need to bump it to 1080p at 16 x 9. I have had some interesting and pleasant results using the Vegas video editor, but not to the extent of this super resolution process.
    So I'm preparing to archive some videos of a former Metropolitan Opera star from the late 1940s, with videos of her from the 1980s on VHS, which already contain an excellent hi-fidelity soundtrack in stereo. It would be much nicer to make the video look professional and acceptable.
    So your explanation and description are great. How do I do it? How do I get it in video? One frame at a time, like Photoshop? Or, because my mathematical aptitude sucks so badly, might this be out of the realm of my capabilities? Then it will just have to look amateur-VHS fuzzy until this process is refined and becomes a single-click plug-in in your choice of video editing software. But it's not quite there yet. Oh well, I'll just have to wait a long time before that happens. At least a month or two.
    And what kind of resolution will we see if Donald Trump gets reelected? As clear as shit, that's what, clearing the way for 4 more years of insane shit.
    I vote for Super Democracy Resolution. We need an enhanced democracy with increased resolution and political transparency. If only that were possible?
    I'm going to vote for increased democratic resolution.
    RemyRAD

    • @leoisikdogan
      @leoisikdogan  4 years ago

      Wow, I have no idea how to reply to this :) As for the VHS footage, EDVR (github.com/xinntao/BasicSR) may work well. I haven't tried it, but it seems to do a good job on video super resolution. As for the increased democracy resolution, I'm not a US citizen yet, therefore I cannot vote. So, whoever Americans want to see in the Oval Office, I respect their choice.

  • @superaluis
    @superaluis 4 years ago

    Thanks for this great summary video. Best updated video on this topic for sure!

  • @harsha.n9332
    @harsha.n9332 4 years ago +3

    You are the most underrated guy....
    This is too much for me though 😅

  • @DinJerr
    @DinJerr 5 years ago +2

    Can we avoid using the term 'hallucinate' to describe detail generation? Hallucination is conjuring up images without any external stimuli causing them. In the case of super resolution, you definitely need the current pixel information to determine what gets drawn in the in-between space, and in the case of GANs, the model draws upon the images it was trained on to create those details. None of these come out of 'nowhere', so by the dictionary definition, 'hallucinate' is the wrong word to use.
    Some parties use the term 'dream' instead, and it's a bit more appropriate, as it draws upon the knowledge/memory of the dreamer when making up its details.

    • @leoisikdogan
      @leoisikdogan  5 years ago +1

      That makes sense. I didn't come up with the usage of the word hallucination in the context of generative neural networks though. It's been widely used for methods that generate information that cannot be inferred from the input alone.

    • @TheAkbar1000
      @TheAkbar1000 5 years ago +1

      I believe that an illusion is the misinterpretation of sensory information, and a hallucination is the addition of extra details which are not sensed by our sensory organs but are added by our brains... In that sense, this word seems quite appropriate for the action it describes.

  • @antoinec5070
    @antoinec5070 3 years ago

    Fantastic video! It summarises several papers so well.

  • @hammerchu948
    @hammerchu948 3 years ago

    Amazing summary, great video~!

  • @AkhilBabel
    @AkhilBabel 5 years ago

    Love your videos Leo. Please keep it up.

  • @morgan3913
    @morgan3913 5 years ago +2

    Can you make a video summarizing some methods of applying computational photography techniques to smartphone cameras? More specifically, approaches that allow for more manual inputs compared to the automatic features found in the Google Pixel devices. I would love to see a video centered around the idea of capturing sensor data with a smartphone camera with the intention of using resource intensive techniques on a separate more powerful computer to process the image.

    • @leoisikdogan
      @leoisikdogan  5 years ago +1

      The basic computational photography techniques that smartphones use are similar to the ones that other digital cameras use. I made a video about that earlier: ruclips.net/video/3E8DlKYKnO4/видео.html
      Beyond those basic processing methods, Google Pixel and other advanced smartphone cameras use more sophisticated burst image processing methods to make up for the shortcomings of a small imaging sensor. I might cover those methods in a future video.

    • @morgan3913
      @morgan3913 5 years ago

      @@leoisikdogan I did see that video, and as an explanation of the theoretical framework for the image processing pipeline it was very good. However, I would like to see ways to apply computational photography techniques if one were to take a more manual approach, as opposed to relying on the manufacturer's implementation. This topic is relatively new to me, so I may not be asking the correct questions. Basically, how much further can you push the image quality of mobile photography beyond the manufacturer's first-party application?

    • @leoisikdogan
      @leoisikdogan  5 years ago +1

      @@morgan3913 You might find Marc Levoy's work interesting. He has been working on this topic for quite some time and now is leading Google Pixel's camera team.
      Some of his papers:
      Burst photography for high dynamic range and low-light imaging on mobile cameras
      graphics.stanford.edu/papers/hdrp/hasinoff-hdrplus-sigasia16-preprint.pdf
      Pixel Night Sight
      ai.googleblog.com/2018/11/night-sight-seeing-in-dark-on-pixel.html
      Handheld Mobile Photography in Very Low Light
      arxiv.org/pdf/1910.11336.pdf
      There are also these papers from Intel Labs (not my team though):
      Learning to See in the Dark
      openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Learning_to_See_CVPR_2018_paper.pdf
      Seeing Motion in the Dark
      vladlen.info/papers/DRV.pdf
      I guess even those papers alone have enough material for a new video :)

  • @AustinNGrayson
    @AustinNGrayson 3 years ago

    If a game runs at native 4K, is there any reason to even turn it on? Does it cause input lag if I do turn it on? I have it on low right now in game mode.

  • @parsamadayeni
    @parsamadayeni 2 years ago

    Does the procedure affect the respawn time?

  • @이종학-t1j
    @이종학-t1j 11 months ago

    How interesting! Thanks for your explanation :) I clicked the like button

  • @aksshaysharma96
    @aksshaysharma96 2 years ago

    Is implementing super resolution, in a very rudimentary way, an implementation of AI?

  • @Rose-qs2gy
    @Rose-qs2gy 4 years ago

    Sir, can I do a super resolution project as my academic project?

  • @ttrkaya
    @ttrkaya 5 years ago

    Enjoyed a lot! Thanks for the well-made video.

  • @Phobos11
    @Phobos11 4 years ago +1

    In the gaming world, we don't use the ESRGAN model for real images, as assumed in the video; we have trained our own purpose-built models for different cases

    • @leoisikdogan
      @leoisikdogan  4 years ago +1

      As far as I know, the early examples used ESRGAN trained on natural images. I would be interested to see the results of models trained on video game graphics. Do you have any checkpoints for such models?

    • @Phobos11
      @Phobos11 4 years ago +2

      @@leoisikdogan yes, you can find our public models in the database at upscale.wiki/wiki/Model_Database . More information and showcases in the subreddit (www.reddit.com/r/GameUpscale/), the Discord channel (discord.gg/cpAUpDK) and some custom modifications and tests to the BasicSR codebase in my fork at github.com/victorca25/BasicSR .

    • @leoisikdogan
      @leoisikdogan  4 years ago +3

      Thanks for the links. It looks like the community has grown a lot. There are many model checkpoints fine-tuned for different tasks.

    • @Phobos11
      @Phobos11 4 years ago +1

      @@leoisikdogan Indeed, we worked on many different cases, and a parallel group grew alongside the game-upscale one, more focused on animation upscaling. We ended up creating models for many tasks; in my fork I included the code to automatically augment images with different types of degradations, but the other guys did the hard work of creating the datasets and training the models. I've been unable to participate for a while, but it's a great community; last time I was there, there was interest in DAIN as well.
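
      A minimal sketch, assuming Pillow, of the kind of degradation augmentation described above for building (low-res, high-res) training pairs; the blur radii, JPEG quality range, scale factor, function name, and file names are illustrative assumptions, not the fork's actual code.

          import random
          from io import BytesIO

          from PIL import Image, ImageFilter

          def degrade(hr: Image.Image, scale: int = 4) -> Image.Image:
              """Make a degraded, low-resolution copy of a clean high-resolution image."""
              # Blur, then downscale, to mimic an imperfect capture.
              lr = hr.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2.0)))
              lr = lr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
              # Re-encode as JPEG at a random quality to add compression artifacts.
              buf = BytesIO()
              lr.save(buf, format="JPEG", quality=random.randint(30, 90))
              buf.seek(0)
              return Image.open(buf)

          if __name__ == "__main__":
              hr = Image.open("clean_frame.png").convert("RGB")  # training target
              lr = degrade(hr)                                   # network input
              lr.save("degraded_frame.jpg")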

  • @lesleyeb
    @lesleyeb 4 years ago

    Great summary! Thanks!!

  • @yasinilyas2134
    @yasinilyas2134 3 years ago

    Thanks for the input!

  • @randomname4726
    @randomname4726 2 years ago

    Excellent video!

  • @bucoescobar1961
    @bucoescobar1961 3 years ago

    Very good video! I came to this from some convolutional neural network videos while I was googling AMD's new Super Resolution feature (a competitor to Nvidia's DLSS).
    Also, did anyone ever tell you that you look like Quentin Tarantino's son? :D

    • @leoisikdogan
      @leoisikdogan  3 years ago

      Haha, yes I do get that sometimes :)

  • @shahrinnakkhatra2857
    @shahrinnakkhatra2857 3 years ago

    Can't we just create a synthetic burst of images to recreate Google's approach?

  • @sourabhbhattacharya9133
    @sourabhbhattacharya9133 3 years ago

    There should be a super-like option on RUclips for content like this

  • @polaris911
    @polaris911 4 years ago +1

    what if you put the upscaled image into the ESRGAN input and upscale again? INFINITE RESOLUTION!

    • @leoisikdogan
      @leoisikdogan  4 years ago +2

      Upscaling artifacts become more visible every time you upscale. It seems to produce decent results for up to 16x upscaling in both axes.

    • @william_SMMA
      @william_SMMA 4 years ago

      @@leoisikdogan nice
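
    A minimal sketch of the chained upscaling discussed in the exchange above. Here upscale_4x is a placeholder (plain bicubic resizing with Pillow) standing in for a learned 4x model such as an ESRGAN checkpoint; the function names and file names are assumptions for illustration. Each pass also re-amplifies whatever artifacts the previous pass introduced, which is why quality tends to fall off beyond roughly two passes (16x in both axes).

        from PIL import Image

        def upscale_4x(img: Image.Image) -> Image.Image:
            """Placeholder 4x upscaler (plain bicubic); swap in a real SR model here."""
            return img.resize((img.width * 4, img.height * 4), Image.BICUBIC)

        def chained_upscale(img: Image.Image, passes: int = 2) -> Image.Image:
            """Feed the output back in as the input: two passes of 4x give 16x overall."""
            for _ in range(passes):
                img = upscale_4x(img)
            return img

        if __name__ == "__main__":
            low_res = Image.open("input.png")
            high_res = chained_upscale(low_res, passes=2)  # 16x in both axes
            high_res.save("output_16x.png")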

  • @hsiang-yehhwang2625
    @hsiang-yehhwang2625 3 years ago

    Nice!! Thanks for sharing!!

  • @marwanssalem
    @marwanssalem 4 years ago +1

    What about video super resolution?

    • @abhiigg
      @abhiigg 4 years ago

      Nvidia is on that

  • @watercui9346
    @watercui9346 4 years ago

    Very clear, bro, thank u.

  • @ihsankaratas2655
    @ihsankaratas2655 5 years ago

    Thanks Leo. You are the best

  • @jec_ecart
    @jec_ecart 4 years ago +1

    That made my head spin a bit. However, it's cool! 👍

  • @ddkgurumani4770
    @ddkgurumani4770 2 years ago

    Very informative

  • @yusufsevinc609
    @yusufsevinc609 4 years ago

    thank you

  • @redaabdellahkamraoui7325
    @redaabdellahkamraoui7325 5 years ago

    Outstanding!

  • @GeoffHolman
    @GeoffHolman 3 years ago

    What's your view on the image processing experts in the Kyle Rittenhouse trial? I would love to see a review of how accurate the expert advice was.

  • @ritwek98
    @ritwek98 4 years ago

    thanks...

  • @nasereddinehafidi7220
    @nasereddinehafidi7220 3 years ago

    Interesting results

  • @Kidbuuisstrongerthanbroly
    @Kidbuuisstrongerthanbroly 4 years ago

    This is fucking incredible

  • @vijayanand8217
    @vijayanand8217 3 years ago

    wow

  • @eyescreamcake
    @eyescreamcake 2 years ago

    The information in the training set does not count as information in the sense of data processing inequality. This is not actual super resolution; it's just educated guessing. All the details in *all* of these examples are completely made-up.
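
    For reference, the data processing inequality that this comment appeals to can be stated as follows; the symbols are chosen here for illustration (X is the true high-resolution scene, Y the low-resolution capture, and \hat{X} = f(Y) the reconstruction computed from Y alone).

        % If X -> Y -> \hat{X} form a Markov chain (the reconstruction depends on X
        % only through the observed low-resolution image Y), then no processing of Y
        % can increase the mutual information with X:
        \[
          X \rightarrow Y \rightarrow \hat{X} = f(Y)
          \quad\Longrightarrow\quad
          I(X; \hat{X}) \le I(X; Y).
        \]

    Any apparent extra detail therefore has to come from the model's prior (its training data) rather than from new information about the scene, which is the point the comment is making.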

  • @umutcelik9821
    @umutcelik9821 1 year ago

    Bro, are you Turkish?

  • @benbar5449
    @benbar5449 4 years ago

    This video has only 5k views? wtf.

  • @XX-vu5jo
    @XX-vu5jo 4 years ago

    What happened to this channel???

  • @RetroBulgaria
    @RetroBulgaria 5 years ago

    I like it when somebody proudly speaks about things he does not understand. Is he talking about gay resolution?

    • @BenderdickCumbersnatch
      @BenderdickCumbersnatch 5 years ago +4

      Someone speaking about things he does not understand? Ah, you are speaking about yourself, Markov.

    • @RetroBulgaria
      @RetroBulgaria 5 years ago

      That's from your perspective. Because you understand NOTHING about this matter, you listen and say "Wow, this guy is a genius", but he is only a "genius" in the blind eyes of people who understand nothing, like you. For people who seriously work in this domain, these "presentations" are comically amateurish.

    • @BenderdickCumbersnatch
      @BenderdickCumbersnatch 5 years ago +1

      @@RetroBulgaria Why are you so mad bro?

    • @RetroBulgaria
      @RetroBulgaria 5 years ago

      Because I'm tired of the amateurs (who claim to be professionals), dummies, morons, stupid people, etc. that have filled the Western world.

    • @BenderdickCumbersnatch
      @BenderdickCumbersnatch 5 years ago +3

      Your over-reaction to a nicely explained summary video is insane. And you called him gay for no reason and without proof. You seem to be both rude and stupid at the same time. ;-)

  • @vladimirbosinceanu5778
    @vladimirbosinceanu5778 2 months ago

    Great video!