Traefik vs. Nginx performance benchmark

  • Published: 20 Nov 2024
  • Science

Comments • 86

  • @AntonPutra
    @AntonPutra  1 year ago +1

    🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com

  • @Openspeedtest
    @Openspeedtest 1 year ago +62

    This is why Nginx is the undisputed King for superior performance with minimal resource utilization.

    • @AntonPutra
      @AntonPutra  1 year ago +5

      For now =)

    • @tcurdt
      @tcurdt 1 year ago +7

      and the configuration is less of a nightmare than the Traefik config is

    • @nikolas4786
      @nikolas4786 1 year ago +4

      And for security, nginx is the best

    • @altairbueno5637
      @altairbueno5637 1 year ago

      Until pingora arrived

    • @tcurdt
      @tcurdt 1 year ago

      @@altairbueno5637 AFAIU it hasn't been open sourced yet. Or did I miss that?

  • @AntonPutra
    @AntonPutra  1 year ago +4

    ❤Go (Golang) vs Node JS (Microservices) performance benchmark - ruclips.net/video/ntMKNlESCpM/видео.html
    ❤Go (Golang) vs. Rust: (HTTP/REST API in Kubernetes) Performance Benchmark - ruclips.net/video/QWLyIBkBrl0/видео.html
    ❤AWS Lambda Go vs. Rust performance benchmark - ruclips.net/video/wyXIA3hfP88/видео.html
    ❤AWS Lambda Go vs. Node.js performance benchmark - ruclips.net/video/kJ4gfoe7gPQ/видео.html
    ❤AWS Lambda Python vs. Node.js performance benchmark - ruclips.net/video/B_OOim6XrI4/видео.html

  • @javohirmirzo
    @javohirmirzo 1 year ago +6

    Recently I started getting your videos recommended. They are very interesting indeed. Keep up the good work!

  • @kingsathurthi
    @kingsathurthi 1 year ago +5

    Very interesting to see the performance difference, thanks for making this video

  • @yabokunokami8418
    @yabokunokami8418 1 year ago +6

    Your videos are really great. Keep up the good work!🔥

  • @blender_wiki
    @blender_wiki 9 months ago +1

    The best, or even the only real, comparison video I found on YT. 🙏🙏🙏
    Nice testing protocol.

    • @wanarchives
      @wanarchives 8 months ago

      Yeah, his videos are mind-blowing to me, I've never seen any YouTuber do benchmarks like this guy... super nice

  • @danielviloria5675
    @danielviloria5675 1 year ago +1

    I love you, this is the kind of question I wanted answered

  • @КириллКириллович
    @КириллКириллович 1 year ago +3

    Good stuff, thanks!
    It was obvious that nginx is just too powerful and well-written a tool; matching its results in garbage-collected languages is nearly impossible. I'll use this video as an argument 😊

    • @AntonPutra
      @AntonPutra  1 year ago +1

      You're welcome. I want to compare it with the linkerd proxy next, they say it's fast :)

    • @КириллКириллович
      @КириллКириллович 1 year ago

      @@AntonPutra I'll be waiting for that comparison, I'll subscribe so I don't miss it :)

    • @AntonPutra
      @AntonPutra  1 year ago

      @@КириллКириллович ))

  • @samelie
    @samelie 1 year ago +1

    Thank you for sharing these well designed tests - am learning a lot!

  • @Phyx1u5
    @Phyx1u5 1 year ago +2

    Thanks for the video. I do like Traefik for the simple fact that it's a bit more noob-friendly when using it with Kubernetes

  • @Nick-yd3rc
    @Nick-yd3rc 1 year ago +6

    Thanks, really like your videos. Would appreciate a comparison with haproxy next time

  • @picatchumm64
    @picatchumm64 1 year ago +3

    Hi, thanks, I like your videos. Would appreciate a comparison of Traefik vs. Caddy 2 next time

  • @---tr9qg
    @---tr9qg 1 year ago +1

    It was .... deep and pro 🔥🔥🔥

  • @dgjtf
    @dgjtf 1 year ago +2

    Great! Would like to see a benchmark of Envoy proxy

  • @kamurashev
    @kamurashev 1 year ago +4

    Interesting, I used Apache (httpd) as a reverse proxy, mostly for historical reasons; it would be interesting to see how it compares.
    And such surprising results for gRPC, I would never have guessed it could be this way. Although I'm wondering whether the actual backend service implementation could affect the results somehow. I don't see any legitimate reason for the proxies to behave this way.
    I can see the request time is much longer for gRPC, so longer-lived connections could possibly consume more resources on the proxy side. It seems the answer may be in the app. Backend analysis might help to figure it out.
    In general I'm surprised how poorly the gRPC setup behaves. I thought it was kind of the holy grail for low-latency systems. I would definitely appreciate a more in-depth analysis of the topic.

    • @AntonPutra
      @AntonPutra  1 year ago +3

      Thanks Kyrylo for the feedback. I'll try to figure it out.

  • @alt404s
    @alt404s 1 year ago

    I just want to say thank you for these very interesting videos. I want you to know that you've been helping me improve in my career and get better jobs. Thank you sir! (I'm subbed!)

  • @altverskov
    @altverskov 1 year ago +1

    Great video!

  • @mikolajsemeniuk8574
    @mikolajsemeniuk8574 1 year ago +1

    🚀🔥❤

  • @LampJustin
    @LampJustin 1 year ago +2

    Does anyone know how an Envoy-based reverse proxy compares? I'm thinking of something like Contour

  • @JRRRRRRRRRRR
    @JRRRRRRRRRRR 7 months ago

    Great video, thank you :)
    Could I consider the NGINX performance the same as the NGINX Proxy Manager (NPM)?

    • @AntonPutra
      @AntonPutra  7 months ago

      Based on the description of the project, yes; they don't 'enhance' the code's functionality, it's mainly TLS.

  • @Taurdil
    @Taurdil 1 year ago +2

    When you say "HTTP/1" here, do you really mean 1.0 without keepalive or 1.1 with keepalive? And from the client's perspective, do we actually close the socket each time?

    • @AntonPutra
      @AntonPutra  1 year ago

      They both support keepalive, not sure about the latter
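
      For context: HTTP/1.0 only keeps a connection open if the client explicitly asks for it, while HTTP/1.1 uses keepalive by default. On the proxy side, reusing upstream connections in nginx needs an explicit config roughly like the sketch below; the addresses and values here are illustrative assumptions, not necessarily what the benchmark used:

      upstream backend {
          server 127.0.0.1:8080;              # hypothetical backend address
          keepalive 32;                       # pool of idle upstream connections to reuse
      }

      server {
          listen 80;
          location / {
              proxy_pass http://backend;
              proxy_http_version 1.1;         # HTTP/1.0 to the upstream would disable keepalive
              proxy_set_header Connection ""; # clear the Connection header so the upstream link stays open
          }
      }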

  • @nootajay
    @nootajay 1 year ago +1

    How is Traefik different from Envoy proxy? I know it's a fork of Envoy, but is it designed as an edge proxy?

  • @sunilkumar-xp7jz
    @sunilkumar-xp7jz 5 months ago

    Traefik API GW Installation guide, websocket support required 💐

  • @stephen.cabreros
    @stephen.cabreros 5 months ago

    what benchmarking platform do you use?

    • @AntonPutra
      @AntonPutra  5 months ago +1

      In that specific case, I used AWS and t3a.small instances. I ran tests multiple times (creating new EC2 instances each time) with the same results.
      github.com/antonputra/tutorials/blob/main/lessons/144/terraform/10-traefik-ec2.tf#L3
      github.com/antonputra/tutorials/blob/main/lessons/144/terraform/11-nginx-ec2.tf#L3

    • @stephen.cabreros
      @stephen.cabreros 5 months ago

      @@AntonPutra thanks bro, the monitoring with the traffic and latency graphs, is it part of an AWS service or another platform too?

    • @AntonPutra
      @AntonPutra  5 months ago

      @@stephen.cabreros It's open-source Prometheus and Grafana. I have all the components and dashboards in my repo in case you want to reproduce it

    • @stephen.cabreros
      @stephen.cabreros 5 months ago

      @@AntonPutra ok I'll check it, thank you for this

  • @yuryzhuravlev2312
    @yuryzhuravlev2312 1 year ago

    nginx drops requests because it can process more, but the OS didn't give it enough resources - you should change the limits.

    • @AntonPutra
      @AntonPutra  1 year ago

      You mean file descriptors? Too much customization, I prefer to use defaults for tests..

    • @incseven
      @incseven 1 year ago

      @@AntonPutra the default value of worker_connections is somewhere around 768 (multiplied by the number of CPUs, since the default "worker_processes auto" starts one worker per CPU)
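
      For reference, a minimal nginx.conf sketch of the directives discussed above; the exact values are illustrative assumptions, not necessarily what the benchmark used:

      worker_processes auto;          # one worker per CPU core
      worker_rlimit_nofile 65535;     # raise the per-worker open file descriptor limit

      events {
          worker_connections 16384;   # connections per worker; packaged defaults are far lower (e.g. 768)
      }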

  • @John-vm7fq
    @John-vm7fq 1 month ago +1

    Nginx vs Pingora

  • @xentricator
    @xentricator 1 year ago

    Isn't using burstable VMs a bad idea for this kind of test? You don't really have any control over when the VM bursts or not.

    • @AntonPutra
      @AntonPutra  1 year ago

      I ran this test at least 4 times (creating and deleting VMs), and each time the result was the same.

    • @xentricator
      @xentricator 1 year ago

      @@AntonPutra I think this may be because the k6 load test is pretty simple (x amount of users for y time without any fluctuation or ramp-ups and ramp-downs, which could result in more bursty workloads instead of pegging the CPU in a fairly constant way). You should watch out in future videos when creating more advanced scenarios in combination with burstable VMs.
      To be clear, I'm not trying to undermine your testing methodology, I really like your videos.

    • @AntonPutra
      @AntonPutra  1 year ago

      @@xentricator Thanks, I'll keep it in mind

    • @Davidlavieri
      @Davidlavieri 1 year ago +1

      Certainly, even with "unlimited" CPU credits (the t3 default) it still throttles the CPU; it should be run on compute-optimized instances to see the difference and whether it affects the results

  • @Hohmlec
    @Hohmlec 2 months ago +1

    Nginx vs pingora

  • @nikitat6750
    @nikitat6750 1 month ago

    02:11 is the phrase there «то есть»? :)

  • @SheeceGardazi
    @SheeceGardazi 6 months ago

    gg

  • @KVS797
    @KVS797 5 months ago

    Please try Pingora

    • @AntonPutra
      @AntonPutra  5 months ago +1

      OK, I'll take a look

  • @Яслежузатобой-щ7б

    Did you buy a Tesla?

    • @AntonPutra
      @AntonPutra  1 year ago +2

      Last March, and they dropped the price by 15k today :(

  • @user-dz6il2bx5p70
    @user-dz6il2bx5p70 1 year ago

    So Go sucks?

    • @AntonPutra
      @AntonPutra  1 year ago

      Not at all. It's great for beginners, and it's easy to find an implementation for anything you're trying to solve.

    • @jonnyd6087
      @jonnyd6087 1 year ago

      @@AntonPutra Great for beginners!? Nginx is C/C++; Golang isn't competing with a behemoth like that! Otherwise Golang is killer.

    • @TheLovealien
      @TheLovealien 1 month ago

      @@AntonPutra Tell that to Google, Docker and all the other companies that use Go on a massive scale

  • @nicholaschin97
    @nicholaschin97 1 year ago

    Are you half-Indonesian?