System Design: Online Judge for coding contests

  • Published: 2 Feb 2025

Comments • 174

  • @gkcs
    @gkcs  3 years ago +47

    Thanks for listening! This video took me 15 minutes to shoot haha. Maybe I should do more short videos since they pack info so efficiently :D
    You can find many more videos at get.interviewready.io!
    Wishing you all a great day ahead!

    • @sudipkumarsengupta1003
      @sudipkumarsengupta1003 3 years ago

      these are really helpful sir

    • @dharmatejaP
      @dharmatejaP 3 years ago

      @Gaurav Sen can u help me with what are the prerequisites to learn system design?

    • @Rajesh-rg6fw
      @Rajesh-rg6fw 3 years ago

      Why don't you start a full course on this? It's really going to help people a lot: DS, algorithms, system design, low-level design, all of it.

    • @varshakampli6035
      @varshakampli6035 2 years ago

      Yes please. Love the short format for a quick interview prep

    • @sankalparora9374
      @sankalparora9374 1 year ago

      Definitely!
      Some LLD sessions would be great as well!

  • @HARSHITSHARMA-yv8vj
    @HARSHITSHARMA-yv8vj 3 years ago +18

    Faced exactly the same question in an interview and gave a very similar approach as the answer, and yes, the discussion went long. Glad to see this video, and thank you so much Gaurav for the helpful content; I surely recommend your System Design Course.

  • @raghavsharma9224
    @raghavsharma9224 3 years ago +44

    This man can prolly explain my physical body system design inside out lol.

    • @nativeKar
      @nativeKar 3 years ago +2

      That'll make him Doctor Gaurav, won't it?

  • @kanuj.bhatnagar
    @kanuj.bhatnagar 3 years ago +17

    And the advantage with a container like Docker is that you can easily assign each container a set amount of memory and test whether the code executes within that amount of memory or not. It also provides a safeguard against any user trying to eat up memory with malicious code.

    • @stealthrabbi9064
      @stealthrabbi9064 2 years ago

      A container would also be locked down considerably; e.g. you wouldn't allow the container to reach out to the internet.
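
The locked-down launch this thread describes can be sketched with standard `docker run` flags; the image name and paths below are made up for illustration, and we only build the command rather than execute it:

```python
# Sketch: how a judge might launch one submission in a locked-down
# container. Image name and mount paths are hypothetical; the flags
# are standard `docker run` options.
def docker_cmd(submission_dir, image="judge-python:3.11"):
    return [
        "docker", "run", "--rm",
        "--network=none",        # no internet access from user code
        "--memory=256m",         # hard memory cap
        "--cpus=1.0",            # at most one CPU's worth of time
        "--pids-limit=64",       # stop fork bombs
        "--read-only",           # immutable root filesystem
        "-v", f"{submission_dir}:/sandbox:ro",
        image, "python", "/sandbox/main.py",
    ]

cmd = docker_cmd("/tmp/sub-42")
```

A real judge would pass this argv to its process runner and also drop capabilities and set a non-root user.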

  • @om_ashish_soni
    @om_ashish_soni 11 months ago +2

    I had implemented my own online judge, and I'm happy to see today that my design was approximately similar to Gaurav's. Thanks!

    • @gkcs
      @gkcs  11 months ago

      Cheers!

  • @hardikmenger7360
    @hardikmenger7360 3 years ago +32

    This short format is way too good.

  • @JatasanVietnamese
    @JatasanVietnamese 1 year ago +3

    Every second of this video is packed with so much good information. Thank you!

    • @gkcs
      @gkcs  1 year ago

      Thank you!

  • @_ArjitKhare
    @_ArjitKhare 7 months ago

    Nice and concise explanation. It just feels so good to know how the platforms I have been using actually work; exciting to see there is such depth to them.

  • @sankalparora9374
    @sankalparora9374 1 year ago

    Very interesting video - every DSA solver would want to know this.
    It was simple to understand - not because it WAS simple, but because of the base you built in the previous videos.
    Thanks!

    • @gkcs
      @gkcs  1 year ago

      Glad it was helpful!

  • @royarijit998
    @royarijit998 3 years ago +37

    "`rm -rf` things that engineers like me like to do"
    Ah, I see you're a man of culture as well. ;)

    • @indsonusharma
      @indsonusharma 3 years ago +6

      I will try this command now on every platform
      Excited to see how all the platforms behave

    • @sankalpkotewar
      @sankalpkotewar 3 years ago +1

      @@indsonusharma Lol... I read somewhere it's kinda against their policy. So maybe try using a new account. :p

    • @rohitmahto1559
      @rohitmahto1559 2 years ago

      that made me laugh 🤣🤣

  • @ashishgohel8659
    @ashishgohel8659 3 years ago +3

    We can also create various Docker environments for the supported languages, with a dedicated topic for each; every container fetches user data and code from its topic and executes it. Also, depending on the density of users for any given language, we could go for horizontal scaling with an LB on top.

  • @varungoel6981
    @varungoel6981 3 years ago +10

    I think the biggest challenge here would be providing such a runtime environment to the program where we can accurately measure the time it takes to get executed!

    • @nitinpandey7478
      @nitinpandey7478 3 years ago +5

      I don't think you have to measure the time; what you can do is run the command with a default timeout

    • @Aspartame12
      @Aspartame12 3 years ago +1

      It isn't that accurate; the same code executed twice can return different time values, even though Docker containers with a standard configuration are used each time.

    • @sivakumar-ho3mw
      @sivakumar-ho3mw 3 years ago

      Yes, I agree it takes some time to create a Docker container from an image for the first time, but after that the image layers are cached locally, so creating another container from the same image won't take more than a second to set up the environment.
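
The default-timeout idea from this thread can be sketched as a tiny runner that kills the submission if it overruns; verdict names and limits are illustrative:

```python
import subprocess
import sys

def run_with_limit(argv, seconds=2):
    """Run a submission; return 'TLE' if it exceeds the time limit."""
    try:
        proc = subprocess.run(argv, capture_output=True, timeout=seconds)
    except subprocess.TimeoutExpired:
        return "TLE"                      # process was killed on timeout
    return "OK" if proc.returncode == 0 else "RE"

verdict = run_with_limit([sys.executable, "-c", "print(1+1)"])
slow = run_with_limit([sys.executable, "-c", "while True: pass"], seconds=1)
```

With this approach the judge never needs a precise wall-clock measurement, only a hard upper bound.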

  • @j24k8
    @j24k8 3 years ago

    Congrats Gaurav, just saw the ring on your finger :) And thank you for the amazing works as always!

  • @panhejia
    @panhejia 1 year ago

    Fast, concise and to the point! Thanks!

  • @vaibhavmehta36
    @vaibhavmehta36 3 years ago

    This short video format is time-saving for all of us :)

  • @coder3101
    @coder3101 3 years ago +12

    Instead of using Docker, we can use seccomp, which Docker itself uses to enforce sandboxing; rather than running a complete container runtime to execute user code, we can sandbox the user process directly.
    Edit: seccomp, in layman's terms, is a "firewall for syscalls"

    • @krisjanispetrucena2642
      @krisjanispetrucena2642 3 years ago

      Containers have low system overhead as opposed to virtual machines, and I was told that seccomp isn't straightforward when running interpreted languages.

  • @manavyadav4382
    @manavyadav4382 3 years ago

    Learned so much in 8 minutes. Keep em coming!

  • @pathikshah
    @pathikshah 3 years ago

    As always Video Quality OP. 🔥 Great Content. 🙏🏼

  • @maheshguptha9796
    @maheshguptha9796 3 years ago

    I love these videos. I am learning the MERN stack, trying to become a full-stack developer, but system design is interesting.
    I am learning Redux now; next week the backend starts and I am excited

  • @senchi0kodo
    @senchi0kodo 3 years ago +1

    Notification squad lol
    brilliant as always. Love your content. Not a lot of channels cover this. Very unique. Thank you!!

    • @gkcs
      @gkcs  3 years ago

      Thank you!

  • @viveksai9353
    @viveksai9353 3 years ago +1

    Just what i was looking for. Thanks buddy.

  • @jenilpatel506
    @jenilpatel506 3 years ago +6

    If we are considering this platform for both practice and online contests, then it would be beneficial to use two different queues for code execution, so that when a contest is running and somebody uploads code for practice, we can prioritize contest submissions first.

  • @kr4k3nn
    @kr4k3nn 3 years ago

    Great explanation... Really look forward to your future videos

  • @sohamshah2127
    @sohamshah2127 3 years ago

    GAURAV YOU BE THE BEST! KEEP ON CREATING SUCH AMAZING CONTENT

  • @savesnine
    @savesnine 3 years ago +9

    Why go right to a message queue? Why not use a load balancer and scale the number of servers? I would first ask what our costs are (maintenance vs financial) before jumping to a message queue. It's not a bad solution; it just seems a bit fast to jump to that conclusion.

    • @segue97
      @segue97 3 years ago

      The financial cost towards running should be the biggest reason. It's also harder ensuring distributed systems remain CP/AP when load balancers come into the mix since we are working with several services.

    • @parthikpatel6108
      @parthikpatel6108 3 years ago +2

      Load leveling with message queues helps with unpredictable/spiky traffic, which is likely during a coding contest (people submitting their answers towards the end). In the event of a traffic jump, even with an LB you can end up dropping some requests.
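
The load-leveling pattern discussed in this thread can be sketched with a bounded in-process queue absorbing a burst while a fixed worker pool drains it; all names, sizes, and the "judging" stand-in are illustrative:

```python
import queue
import threading

# A bounded queue absorbs the end-of-contest burst; workers drain it
# at their own pace instead of the burst hitting the executors directly.
jobs = queue.Queue(maxsize=1000)
results = {}

def worker():
    while True:
        sub_id, code = jobs.get()
        if sub_id is None:                 # poison pill: shut down
            break
        results[sub_id] = f"judged:{len(code)} bytes"  # stand-in for execution
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for i in range(100):                       # burst of 100 submissions
    jobs.put((i, b"print(42)"))

jobs.join()                                # wait until the queue is drained
for _ in threads:
    jobs.put((None, None))
for t in threads:
    t.join()
```

The same shape applies with Kafka or SQS in place of `queue.Queue`: producers never block on execution, and worker count is the scaling knob.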

  • @apoorvbedmutha457
    @apoorvbedmutha457 1 year ago

    What else can be added:
    1. A git-like scheme where only the changes against the user's previous code are sent
    2. The server can be broken into services such as Code Builder, Code Verifier, Code Executor
    3. The code builder receives the changes, applies them to the user's previous code to build the new code, and sends it to the verifier
    4. The code verifier is responsible for detecting any malicious code
    5. The code executor should apply a time limit, which protects it from denial of service due to infinite loops or overly long-running code
    6. Some rate limiting, though not very strict, must be implemented, e.g. a user can submit at most 1 solution per second, to protect the server from overload
    7. The number of users/requests must be taken into account, and the size of the request buffer, the number of servers, availability zones for servers, and load balancers can be discussed.
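
Point 6 above (at most one submission per second per user) can be sketched as a minimal per-user rate limiter; the class and names are hypothetical, and the clock is injected so the behaviour is easy to test (production code would pass `time.monotonic()`):

```python
# Minimal per-user submission limiter: reject anything arriving sooner
# than `min_interval` seconds after the user's last accepted submission.
class SubmissionLimiter:
    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self.last_seen = {}              # user_id -> time of last accept

    def allow(self, user_id, now):
        last = self.last_seen.get(user_id)
        if last is not None and now - last < self.min_interval:
            return False                 # too soon: reject (e.g. HTTP 429)
        self.last_seen[user_id] = now
        return True

limiter = SubmissionLimiter(min_interval=1.0)
a = limiter.allow("alice", now=10.0)     # first submission: accepted
b = limiter.allow("alice", now=10.4)     # 0.4 s later: rejected
c = limiter.allow("alice", now=11.1)     # 1.1 s after the accept: accepted
```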

  • @dhareppasasalatti7102
    @dhareppasasalatti7102 3 years ago +2

    Super explanation.. ❤💯💯

  • @chandernagamalla4476
    @chandernagamalla4476 3 years ago +3

    An add-on layer for security could be to use source-code analysis tools to check for vulnerabilities before executing the code.

    • @lazarus823542
      @lazarus823542 3 years ago

      It's going to take way too long, isn't reliable enough, and is prone to false positives.

  • @indsonusharma
    @indsonusharma 3 years ago +1

    It's a really nice video bhaiya, crisp and easy to understand.
    Thank you so much 😊

    • @gkcs
      @gkcs  3 years ago +1

      Thanks for watching :D

  • @prakhar8690
    @prakhar8690 3 years ago

    Great video, thanks for sharing!

  • @dhmilmile1
    @dhmilmile1 3 years ago

    Thank you for the great explanation.

  • @ibrahimshaikh3642
    @ibrahimshaikh3642 3 years ago

    I was waiting for this.
    Another topic: email system design

  • @darthvader_
    @darthvader_ 2 years ago

    This was a great video!

  • @nickkarmic9527
    @nickkarmic9527 2 years ago

    Thanks Gaurav! This is a great short video format, especially when the info is imparted efficiently!
    Can an alternative approach of using AWS Lambdas or Azure/GCP functions be viable?
    1) With just-in-time compute, hosts don't have to pay for maintaining long-running services
    2) Ideally suited for extreme burst traffic during contests
    3) They can scale to hundreds of thousands of requests per second
    4) Serverless invokes your code in a secure and isolated environment

  • @Omsy828
    @Omsy828 2 years ago +2

    Can you add some more detail to this design? Are you using S3 for storing the code? How will you cache this code for when the user logs back in and wants to work on his code again? What about the database type and how you are sharding? Are you going to cache questions. Would love to see a follow up for this from you

  • @chitranshuchangdar7353
    @chitranshuchangdar7353 3 years ago

    Hi Gaurav,
    I watch your system design videos a lot and the way you explain that is awesome. I really appreciate you for making such videos and hope to do the same in the future as well.
    Just a request if you can do an AWS S3 system design video.

  • @md.ahsankabir2569
    @md.ahsankabir2569 2 years ago

    Nicely explained

  • @RaviTeja5
    @RaviTeja5 1 year ago

    Thanks for the video!
    How do we handle it when a Docker container or service crashes after reading from the queue? How do we rerun the job, assuming the job is already dequeued and we're not keeping track of which machine is processing what?
    One solution could be to use persistent storage like a key-value DB or SQL DB instead of a queue. The records could have the jobId, status, and the machineId that's working on it. Before a new machine/container reads in a job, it checks whether there are any "IN_PROGRESS" jobs under its id; if so, it crashed while processing that job earlier and is now good to run it again. Or maybe a heartbeat server that keeps track of each machine's health and job status.
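
The job-table idea above can be sketched with an in-memory stand-in for the key-value store; job ids, statuses, and machine ids are all illustrative, and a heartbeat monitor is assumed to decide when a machine is dead:

```python
# Jobs carry a status plus the id of the machine working on them, so a
# supervisor can requeue work from a machine that missed its heartbeat.
jobs = {
    "job-1": {"status": "IN_PROGRESS", "machine": "m-1"},
    "job-2": {"status": "PENDING",     "machine": None},
}

def requeue_dead(jobs, dead_machine):
    """Return jobs stranded on a dead machine back to PENDING."""
    recovered = []
    for job_id, row in jobs.items():
        if row["status"] == "IN_PROGRESS" and row["machine"] == dead_machine:
            row["status"], row["machine"] = "PENDING", None
            recovered.append(job_id)
    return recovered

def claim_next(jobs, machine_id):
    """Atomically (in a real DB: compare-and-set) claim one pending job."""
    for job_id, row in sorted(jobs.items()):
        if row["status"] == "PENDING":
            row["status"], row["machine"] = "IN_PROGRESS", machine_id
            return job_id
    return None

recovered = requeue_dead(jobs, "m-1")   # heartbeat monitor declared m-1 dead
claimed = claim_next(jobs, "m-2")       # m-2 picks up the recovered job
```

In a real database the claim must be a single conditional update so two workers cannot grab the same job.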

  • @charan775
    @charan775 3 years ago +2

    An ubuntu container image with all the compilers and spinning it up for execution would do the job :)

    • @gsb22
      @gsb22 3 years ago +1

      Believe me, you won't want a heavy Ubuntu container. More likely minimal Docker images like Alpine.

    • @blasttrash
      @blasttrash 3 years ago

      So if one Python program crashes the container, will other running Java, C++ etc. code also crash?

    • @charan775
      @charan775 3 years ago

      @@blasttrash you can spin a new container for each execution

    • @blasttrash
      @blasttrash 3 years ago

      @@charan775 But won't that be slow? Container startup takes time, right?

    • @charan775
      @charan775 3 years ago

      @@blasttrash If it's a lightweight image, it's not going to take much time.

  • @swagatochatterjee7104
    @swagatochatterjee7104 3 years ago

    If you are running the code inside Docker containers, expect it to run sluggishly; it's better to use cgroups directly.

  • @shivaprasad8142
    @shivaprasad8142 3 years ago

    Good one 👍👏. We only create a new container when such faulty code gets executed and the container crashes; it will need to be configured properly by the operations team.

  • @gauravgarg031
    @gauravgarg031 3 years ago +2

    How will the user be notified of the execution result? A WebSocket connection, or does the user periodically call some API to get the result?

    • @suhasnama2795
      @suhasnama2795 3 years ago

      I'd choose WebSockets over periodic API calls to get the result. If N periodic API calls are made per submission, then for M active users submitting their code, that is N*M HTTP(S) connections made to our service, and N (the number of calls) grows as M (active users submitting) increases. With WebSockets we need to maintain only M active connections at any given point in time. But I would love to see other ideas on notifying the user. What do you think @Gaurav Sen?

    • @utkarshgupta8061
      @utkarshgupta8061 3 years ago

      @@suhasnama2795 Check Server Sent Events once.

  • @hans2400
    @hans2400 3 months ago

    Tryna build something like this, however I think a lot of people keep skipping over the actual test case part. How would that work, for example test this code for these 10 test cases?

  • @rishabhanand4270
    @rishabhanand4270 3 years ago +2

    Hmm, wouldn't you suggest a serverless architecture here? Like a bunch of lambdas (AWS lingo) ready to execute your code and after execution submit their reports to a message queue which ultimately writes to a database?

    • @charan775
      @charan775 3 years ago

      +1. want to know how this will work

    • @gsb22
      @gsb22 3 years ago

      That's a good point. In case of contests, you would want to scale as quickly as possible but that would be a specific case.

  • @CarbonRiderOnline
    @CarbonRiderOnline 3 years ago +9

    Definitely, not a viable solution as spinning containers for each submission is not only time-consuming but not effective from a cost perspective. The appropriate approach will be policy/permission-based execution. This not only ensures security but also reduces turnaround time.

    • @vhawkins8289
      @vhawkins8289 3 years ago +2

      I had this thought too. You could also use the same container for all submissions until some code makes it crash.

    • @DodaGarcia
      @DodaGarcia 2 years ago +1

      @@vhawkins8289 Those are interesting points which I hadn't realized when watching the video. One question I have: without creating one container per code run, even if execution is secure by way of permissions, doesn't it become less certain that the code that was just executed didn't leave side effects that might affect the next run? Aside from security, I thought the single-use containers might be good for guaranteeing a fresh environment each time.

  • @jeetk
    @jeetk 2 years ago

    Hi Gaurav, I want to add a few more requirements to this design and see how it evolves. Let's say we now want to maintain scores for each user across various problems, and for each contest we want a leaderboard (users sorted by total score, plus my current position in it). I recently faced this in an interview and ended up designing the leaderboard as something computed at runtime, which wouldn't scale well, hence asking.

  • @sirjansingh310
    @sirjansingh310 3 years ago +2

    Containerisation is the way to go, agreed. But isn't creating and destroying containers in real time on demand expensive? I am thinking a pool of containers created ahead of time could be the solution. Or is it something else?

    • @sivakumar-ho3mw
      @sivakumar-ho3mw 3 years ago

      For example, if you have 100 users uploading their code but a pool of 1000 Docker containers provisioned in expectation of 1000 users, then only 100 containers are utilised and the other 900 sit idle. Avoiding that kind of over-provisioning removes most of the cost of the system.
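
The pre-warmed pool discussed in this thread can be sketched as follows; container handles are simulated with strings, and "cold start" is just a counter, so the only point illustrated is the trade-off between idle capacity and startup latency:

```python
import collections

# A fixed-size pool of warm containers: acquire() is free while the
# pool has idle entries, and pays a cold start once it is exhausted.
class WarmPool:
    def __init__(self, size):
        self.size = size
        self.idle = collections.deque(f"c{i}" for i in range(size))
        self.cold_starts = 0

    def acquire(self):
        if self.idle:
            return self.idle.popleft()   # warm: no startup latency
        self.cold_starts += 1            # pool exhausted: spin up fresh
        return f"cold-{self.cold_starts}"

    def release(self, container):
        if len(self.idle) < self.size:   # recycle after a reset/wipe
            self.idle.append(container)

pool = WarmPool(size=2)
a, b, c = pool.acquire(), pool.acquire(), pool.acquire()
```

Sizing the pool is the real design question: too big and containers sit idle (cost), too small and submissions queue behind cold starts (latency).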

  • @dev__adi
    @dev__adi 3 years ago +1

    I wish you had delved into the security and containers part more

  • @TheIndianGam3r
    @TheIndianGam3r 3 years ago +1

    I'm confused about one part here. How would the users get the verdict in the desired time? Won't they have to wait longer as the async queue grows?

    • @KaiAble0601
      @KaiAble0601 3 years ago

      yeah users indeed don’t see the results immediately in competitive programming.

    • @charan775
      @charan775 3 years ago

      I think you can scale the server plus you can execute multiple events at once

    • @viniciusmateus7979
      @viniciusmateus7979 3 years ago +2

      The queue is there so that insertions are handled by a dedicated service. With that in mind, you can scale this service if you need more performance. The most important point is that you do not kill your database with a flood of requests.

    • @omkarajagunde4175
      @omkarajagunde4175 3 years ago

      Well, there can be multiple servers that poll the queue to execute code and reduce user wait time.

  • @akashrajpurohit97
    @akashrajpurohit97 3 years ago +6

    If asked this question, "how would you solve the cold start problem when using docker containers?" What would one answer? Can we have some containers already created for some languages for eg x containers for python, y containers of cpp etc? Or any other method for solving this?
    PS I'm new to system design so anyone reading this, let me know if I'm going in wrong direction :)

    • @JenilCalcuttawala
      @JenilCalcuttawala 3 years ago

      Was wondering the same. Waiting for someone to answer.

    • @JenilCalcuttawala
      @JenilCalcuttawala 3 years ago

      Btw if I'm not wrong, what Gaurav has mentioned is the approach where you'll be spinning up ephemeral containers every time a submission is made. Hence a separate environment for each submission.

    • @JenilCalcuttawala
      @JenilCalcuttawala 3 years ago

      Okay, in one of the comments he has mentioned that we can reuse containers. Now I'm more curious.

    • @coder3101
      @coder3101 3 years ago

      @@JenilCalcuttawala Sandboxing the code with seccomp is a better approach.

    • @ashishgohel8659
      @ashishgohel8659 3 years ago +1

      Well, what you can do is keep containers up and running for the various environments/languages; depending on the candidate's choice of language, the code is pushed to a separate channel/topic (each container fetches code from its own topic and executes it sequentially; you can even scale a topic's consumers horizontally and put a load balancer on top of that).

  • @shivambedwal3285
    @shivambedwal3285 3 years ago +1

    Could we have used an Elastic Container Service, which would also scale the containers as per the load?
    Also, a serverless function could fit the requirement here, as it scales automatically and provides an environment to run code.
    Or would a load-balanced set of VMs which scale also suffice for this sort of scenario?

  • @rapyxroyals4585
    @rapyxroyals4585 3 years ago

    Thanks a lot.. can we get one design video for Kafka ?

  • @SanjayGupta-ii8hh
    @SanjayGupta-ii8hh 3 years ago +1

    Nice 👍

  • @Username-gu3un
    @Username-gu3un 3 years ago

    I was asked this specific question in a Meta interview. I think the interviewer was trying to challenge me; he asked me to dive deep into the code execution part, how the code execution works. I was not able to answer that question very well.

  • @trialaccount2244
    @trialaccount2244 3 years ago

    Don't you think that containerization is more like hiding vulnerabilities with some infrastructure ? Is there any other way we can do this?

  • @aburifatm
    @aburifatm 1 year ago

    Thank you so much

  • @AvijoyBhowmick
    @AvijoyBhowmick 3 years ago

    Great explanation. would love to learn more about virtualization containers.

    • @gkcs
      @gkcs  3 years ago

      Try ruclips.net/video/GOuVeZmMee0/видео.html

  • @himanshugarg357
    @himanshugarg357 3 years ago

    Which is better for starting a startup coded by one person: PHP or full stack? Easiest and best.

  • @mpataki
    @mpataki 2 years ago

    Uploading to S3 where most of the code is below a MB?

  • @vigneshkumarganesan1529
    @vigneshkumarganesan1529 3 years ago

    Thanks for the video, it's short and easy to understand.
    I have a question: is it a good approach to launch a Docker container for each code execution? It will take some more time to launch the container, right?

  • @pradeeshbm5558
    @pradeeshbm5558 2 years ago

    Can we introduce something like a 'code minifier' and check if the same code was already submitted in the past? If so, immediately return the stored result (no need to execute the same code again).
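
The dedup idea above can be sketched by normalising the source a little, hashing it, and reusing the stored verdict on an exact repeat; the normalisation and cache here are deliberately naive (a real judge would normalise per language, and might skip caching entirely if test data changes):

```python
import hashlib

# Verdict cache keyed by a hash of lightly-normalised source code.
verdict_cache = {}

def code_key(source: str) -> str:
    # Strip leading/trailing whitespace per line so trivially
    # reformatted resubmissions hash to the same key.
    normalised = "\n".join(line.strip() for line in source.strip().splitlines())
    return hashlib.sha256(normalised.encode()).hexdigest()

def judge(source, execute):
    key = code_key(source)
    if key in verdict_cache:
        return verdict_cache[key], True      # (verdict, served from cache)
    verdict = execute(source)                # fall through to real execution
    verdict_cache[key] = verdict
    return verdict, False

v1, cached1 = judge("print(42)\n", execute=lambda s: "AC")
v2, cached2 = judge("  print(42)", execute=lambda s: "WA")  # same after normalising
```

Note the second call returns the cached "AC" and never runs its executor, which is exactly the saving the comment proposes.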

  • @sujan_kumar_mitra
    @sujan_kumar_mitra 3 years ago +1

    Generally, programs need certain permissions from the OS to run specific kinds of code.
    Things like network calls and disk access can be blocked by setting up those checks. Will that be better than containerization?

    • @gkcs
      @gkcs  3 years ago +1

      Yes it will. Limiting resource usage will be nice with containerisation though. You can also reuse a virtual OS or containers for different code executions.
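
The resource-limiting point above can also be sketched without a container, using POSIX rlimits on the child process; this is a Linux-oriented sketch (limit values and the probe allocation are illustrative), not how any particular judge does it:

```python
import resource
import subprocess
import sys

# Cap the child's address space so a greedy allocation fails inside the
# child (MemoryError) instead of starving the host machine.
LIMIT = 512 * 1024 * 1024            # 512 MiB address-space cap

def capped():
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

proc = subprocess.run(
    [sys.executable, "-c", "x = bytearray(2 * 1024**3)"],  # wants 2 GiB
    preexec_fn=capped,               # applied in the child, pre-exec
    capture_output=True,
)
blocked = proc.returncode != 0       # uncaught MemoryError -> nonzero exit
```

A container gives the same effect (plus isolation) via cgroups; rlimits are the lighter-weight building block underneath.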

  • @himanshuchitranshi957
    @himanshuchitranshi957 3 years ago +1

    Just want to ask: does it make sense to store incoming user code, with its metadata, in some sort of DB/file repo alongside the message queue, so that if the server crashes we can rebuild the message queue in the correct sequence?

    • @ashishgohel8659
      @ashishgohel8659 3 years ago

      You could have another async call to another service, or perhaps another service subscribed to the channel, whose primary responsibility is to store user details and code in Mongo or create a file against the userId (of course, you need to be careful about snippets like "rm -rf") :)

  • @gouthamnagraj5445
    @gouthamnagraj5445 3 years ago

    Bro, explain the challenge of code testing: on LeetCode what we write is just the logic, not the whole program, while in other cases it starts from main(). We want to know how the user's binaries get compiled and injected while the driver-code binaries remain constant. What strategies can be used?
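
One common strategy for the driver-code question above can be sketched as follows: the platform keeps a fixed driver that loads the user's snippet (here via `exec` into a fresh namespace) and calls an agreed entry point against each test case. The entry-point name `solve` and the cases are illustrative; compiled languages do the analogous thing by linking the snippet into a constant harness:

```python
# Fixed driver harness: load the user's snippet, call solve() per case.
def run_tests(user_source, cases):
    ns = {}
    exec(user_source, ns)              # "inject" the user's logic
    solve = ns["solve"]                # contract: user must define solve()
    verdicts = []
    for args, expected in cases:
        try:
            got = solve(*args)
            verdicts.append("AC" if got == expected else "WA")
        except Exception:
            verdicts.append("RE")      # runtime error in user code
    return verdicts

submission = "def solve(a, b):\n    return a + b\n"
cases = [((1, 2), 3), ((5, 5), 10), ((2, 2), 5)]
verdicts = run_tests(submission, cases)
```

In production the driver and the snippet still run inside the sandbox together; the harness only fixes the calling convention, not the security boundary.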

  • @mpty2022
    @mpty2022 1 year ago

    Are you talking about DOMjudge? I didn't catch the name.

  • @akshitbansal352
    @akshitbansal352 3 years ago

    I was thinking of something like creating child processes for each execution. When a large number of users request to execute their code at the same time, more processes simply means slower execution of each individual program. For security, how about we prevent users from including certain libraries in their code?

  • @eduartpunga
    @eduartpunga 3 years ago

    How did you color the computers behind you red at 0:16? I don't understand how the red color did not overlap with you if you are not using a green screen. Or did you use a green screen for a few seconds when explaining at that moment, knowing the position of your diagram elements?

    • @eduartpunga
      @eduartpunga 3 years ago

      Oh ok, you are using like a color detection thing, cuz at 2:49 it overlaps with your marker. But I'm still curious how and in what program you do it.

    • @gkcs
      @gkcs  3 years ago

      Step by step:
      1. Open Adobe Premiere.
      2. Copy the camera footage layer and place it over the original layer. You now have 2 layers of camera footage, one on top of the other.
      3. Ultra key the board color out (color detection and removal) of the top layer.
      4. Place a color between the two camera layers (Color matte in Premiere).
      5. Adjust the ultra key settings as required (mask only the conflicting zones).
      6. Magic ;)
      I discovered this with trial and error :D

  • @alasmith7458
    @alasmith7458 3 years ago

    Great video as always. One question: do we need to create containers for every submission?

  • @pprathameshmore
    @pprathameshmore 3 years ago

    Just bought his course today

  • @aki20947
    @aki20947 3 years ago

    Can we run different lambdas to run the code, if we identify which programming language is used? This would give us a more scalable microservice system that is more fault tolerant against those rm commands.

  • @peiqingsong1408
    @peiqingsong1408 3 years ago

    Shouldn't we have a load balancer in front of the servers, if a single server cannot handle the number of users?

    • @divyanshujuneja1291
      @divyanshujuneja1291 3 years ago

      Like he said in the video, we are taking a 10,000 ft. view of it. That box will obviously contain not just load balancers but a lot of other microservices.

  • @omkarajagunde4175
    @omkarajagunde4175 3 years ago

    Maybe this is an immature question, but since per the queue structure we are executing the code in containers asynchronously, how will the server tell the user when their results are ready? Will it be a stateful session like a WebSocket, or will the client hit a REST API on some interval to get the results?
    Thank you in advance

    • @gursharanaulakh6882
      @gursharanaulakh6882 3 years ago +1

      In my opinion, the client will keep retrying REST API calls to get the processed output, with exponentially increasing intervals, until the result is available. There will be a default timeout as well, in case code execution does not complete within the max dedicated time, to show an execution error to the user.
      There is a "Retry-After" header which can be used to tell the client to retry after a given duration.

    • @omkarajagunde4175
      @omkarajagunde4175 3 years ago +1

      Yes, that seems a viable solution; thanks for answering.

    • @utkarshgupta8061
      @utkarshgupta8061 3 years ago

      Or, since we're only looking at pushing data from server to client and not the other way around, we can also look at Server-Sent Events over HTTP. This would ensure almost instant delivery of messages to the client, instead of the user retrying over and over again. And you won't need to deal with WebSockets either.
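
The poll-with-backoff scheme from this thread can be sketched as follows; `fetch` stands in for the HTTP status call, and delays are collected rather than slept so the control flow is easy to see (a real client would `sleep(wait)` or honour a server-supplied `Retry-After` value):

```python
# Client-side polling with exponential backoff between attempts.
def poll_result(fetch, base=0.5, factor=2.0, max_tries=6):
    delays = []
    wait = base
    for _ in range(max_tries):
        result = fetch()               # None means "not ready yet"
        if result is not None:
            return result, delays
        delays.append(wait)            # client would sleep(wait) here
        wait *= factor                 # back off exponentially
    return "TIMEOUT", delays

answers = iter([None, None, None, "AC"])   # verdict ready on the 4th poll
result, delays = poll_result(lambda: next(answers))
```

SSE or WebSockets replace this loop entirely with a single long-lived connection, trading polling overhead for connection management.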

  • @unleash_vlogs
    @unleash_vlogs 3 years ago

    How about horizontal scaling over here to speed up, does it make sense?

  • @dnhirapara
    @dnhirapara 3 years ago

    Isn't creating a container for every new request time-consuming? How does it scale well if we create a container per request?

    • @gkcs
      @gkcs  3 years ago +5

      We don't. We reuse containers.

  • @ngneerin
    @ngneerin 3 years ago

    Is it better to use S3, which is an object storage, or a file storage system?

  • @sipwhitemocha
    @sipwhitemocha 3 years ago

    Not sure why but 2:21 looks so animated.
    Anyway, thanks for the video Gaurav

  • @rahulbera454
    @rahulbera454 3 years ago

    Can we programmatically create a container with a specific CPU and RAM capacity?

  • @hamsalekhavenkatesh9664
    @hamsalekhavenkatesh9664 3 years ago

    Great video Gaurav, thanks! Also, when we containerize, we can use some sort of auto-scaling behind the scenes to dynamically allocate containers. Or could we use lambdas to execute the scripts on these containers as well?

  • @satish1012
    @satish1012 3 years ago

    Are Lambda functions best for this use case? Of course this comes with vendor lock-in.

    • @gkcs
      @gkcs  3 years ago

      Could turn out to be expensive, but it is possible. At InterviewReady, we use lambda functions only for intermittent and rare tasks.

  • @harishankar-cz9tx
    @harishankar-cz9tx 3 years ago

    Is it safe to re-use a container?
    Something like: once you create a container, keep executing code in it until it crashes?
    Or create a container per user and keep running that user's code in it?
    I feel it should not be safe, but I'm not sure, since bringing up a new container every time we need to run code will be time-consuming.

    • @divyanshujuneja1291
      @divyanshujuneja1291 3 years ago

      We can probably have scripts to reset the configurations of the container right before running the user program in it.

  • @sridharchaitanyagudur7462
    @sridharchaitanyagudur7462 3 years ago +1

    Very thoughtful. I would really like to hear from you on how software systems are designed for spacecraft and rovers that travel in space. Like what communication protocols, etc.?

  • @sj259
    @sj259 2 years ago

    Please do video on websockets

  • @pratikkundnani
    @pratikkundnani 3 years ago

    What if the code gets stuck in an infinite loop? How does the server handle that?

    • @salmanbehen4384
      @salmanbehen4384 3 years ago

      TLE

    • @pratikkundnani
      @pratikkundnani 3 years ago

      @@salmanbehen4384 How will the server know? It'll keep executing until the code produces some output.

    • @salmanbehen4384
      @salmanbehen4384 3 years ago

      @@pratikkundnani You can put up a timer of, say, one second; if the code runs longer than that, break off and give the TLE.

  • @LovepreetSingh-ez5cq
    @LovepreetSingh-ez5cq 3 years ago

    Containers are not a good idea and are slow for this. Most websites use basic Linux security restrictions such as chroot.

  • @kumar_prabhat
    @kumar_prabhat 3 years ago

    what's GKCS tho?

  • @xit
    @xit 2 years ago

    Hmm, I developed a similar system as my undergraduate project. Instead of using only one server, I went with 1 main server and 3 language servers (JS, C, C++). The main server acted as an orchestration layer for the language servers. These language servers would spin up a container from a cached image whenever the user visited the question/coding page. As I wanted to make it real-time (test cases / container status / execution), I had to set up a custom socket-piping workflow to manage connections from the frontend to the main server to the respective language server. We had to parse every single stdout line from the language servers and map it to the text editor on the frontend, on top of the real-time part I was talking about.
    It took about a year (started in 7th sem) to finish the product but it was super fun. If anyone wants to see the demo: ruclips.net/video/TC3zW5LGkRI/видео.html

  • @AbirPalDev
    @AbirPalDev 3 years ago

    Hey @Gaurav Sen
    Thank you for explaining this.
    I was just wondering: won't the latency be too high for a user who has just joined an already long queue?
    Or do we need scaling? Or to create a container for every request as it comes in?

  • @ayushdubey9618
    @ayushdubey9618 3 years ago +1

    When it comes to system design, GKCS is the best

  • @BigBrainCoding
    @BigBrainCoding 3 years ago +3

    nice thumbnail

  • @invisible-fm6lz
    @invisible-fm6lz 3 years ago

    Wow
    Nowadays you keep making videos continuously

  • @karankanojiya7672
    @karankanojiya7672 3 years ago

    Insane 🤯 ! Great explanation 🙏
    So I am just thinking: are the below some reasons why submitting the same code sometimes gives different memory and runtime numbers?
    If virtualization is used:
    a pod can die or be terminated during code execution?

    • @gsb22
      @gsb22 3 years ago

      Even if the pod doesn't die, and even if you are using the same machine to run the same code, you will get different times almost every run because OS processes always interfere with your code's execution.

    • @yashlearnscode5502
      @yashlearnscode5502 3 years ago

      @@gsb22 makes sense, thanks for the explanation.

  • @vishalsharma-bp9zu
    @vishalsharma-bp9zu 3 years ago

    But you are giving your processing power to someone; what if they use it to mine cryptocurrency? How do you deal with that? Timeouts are one thing, but they can still automate it and keep the jobs running.

    • @gkcs
      @gkcs  3 years ago

      I don't allow any outputs or network calls.
      The output is checked with known test cases. Failure just shows "wrong answer".
      Sending us useless code doesn't benefit the sender in any way.

  • @asurakengan7173
    @asurakengan7173 3 years ago

    Containers don't provide security...

  • @swiftiests2852
    @swiftiests2852 3 years ago

    Sorry, I am not going to unfollow you, Gaurav Sir anyhow!
    Btw Nice T-shirt 😅

  • @2024comingforyou
    @2024comingforyou 3 years ago

    What's rm -rf?

    • @yashwantptl7
      @yashwantptl7 3 years ago +1

      The rm command removes files. "-r" together with "-f" (i.e. -rf) makes rm delete the specified folder recursively and forcefully.

  • @dineshshekhawat2021
    @dineshshekhawat2021 3 years ago

    Using Docker containers in rootless mode for even better security?

  • @vinit__rai
    @vinit__rai 3 years ago

    haha! You are asking us to subscribe, but the t-shirt says Unfollow.

  • @manas_singh
    @manas_singh 3 years ago

    This is literally my BTP (B.Tech project)