The JVM Meets WASI: Writing Cloud-Friendly Wasm Apps Using Java and Friends - Joel Dice

  • Published: 18 Oct 2024

Comments • 11

  • @SumanGantaMr
    @SumanGantaMr 1 year ago

    This is a great talk, @Joel. Do you have any good use cases around finer-grained sandboxing that apply to common functions or microservices? Typically, once the sandbox perimeter is defined, it applies to all parts of the system. I'm curious about your example of per-request sandboxing. I can see that, depending on the request, you can apply different sandboxing, but I didn't get any compelling real-world use cases to rally behind this kind of virtual machine as an abstraction.

  • @suikast420
    @suikast420 1 year ago

    I am a senior Java backend developer. I want to understand two points. 1. You are talking about mixing JVM-friendly and JVM-awkward languages. There is GraalVM for this purpose, am I wrong? 2. Can sandboxing be done within Docker or not? Especially in the case of the Log4Shell vulnerability. Where is there more sandboxing with WASI?

    • @223tt322
      @223tt322 1 year ago

      1. Yes -- GraalVM is a great option for mixing JVM and non-JVM languages. I think the WASI Component Model will be another great option when it's ready.
      2. A Docker container can be used to isolate a server process from the rest of the host system, but doesn't help with fine-grained isolation, e.g. on a request-by-request basis. With Wasm, we can guarantee that each incoming request gets its own sandbox, completely isolated from other requests. Docker also doesn't have a lightweight mechanism for isolating dependencies; you can split them into separate containers which communicate via RPC, but that requires additional orchestration, which might not be worth the effort.
      The WASI Component Model can help mitigate or prevent problems like Log4Shell by making it easy to isolate a dependency (like a log framework) as a sub-component which can be sandboxed independently of the rest of the app (e.g. no networking at all, or just networking to a whitelist of allowed hosts). Also, since TeaVM does not support dynamic class loading, Log4Shell-style remote execution is not possible in any case.
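      To make that concrete, here is a rough WIT sketch (the package and interface names are hypothetical, just for illustration) of a logging dependency packaged as its own component. It exports a single interface and imports nothing, so the runtime can deny it network and filesystem access entirely while the main app still calls it:

        package example:logging;

        interface logger {
          // Write a message at the given severity level.
          log: func(level: string, message: string);
        }

        world log-provider {
          // The logger component exports only this interface and imports
          // nothing, so it has no way to open sockets or read files.
          export logger;
        }

        world app {
          // The application imports the logger; the runtime links the two
          // components and can sandbox each one with different capabilities.
          import logger;
        }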

    • @suikast420
      @suikast420 1 year ago

      @@223tt322 1. Correct me if I am wrong. WASI would provide an abstract interface definition so that I can integrate lang1 with lang2. So an IDL-like approach, right?
      2. If I can separate every request into its own sandbox, would every request spawn a single process?

    • @223tt322
      @223tt322 1 year ago +1

      @@suikast420 1. Yes, and the IDL is called WIT (WebAssembly Interface Types). I would provide a link to the specification, but YouTube seems to think I'm spamming when I do that; search the web for "wit" and "component model" and you'll find it. Note that it is still being actively developed, so it's not yet stable.
      2. Per-request sandboxing with Wasm can be achieved without spawning OS processes. Instead, the Wasm runtime statically verifies that the bytecode is well-behaved (e.g. doesn't try to access non-existent local variables) and runs each instance in its own memory space, trapping out-of-bounds accesses.
      Per-request sandboxing can also be done with e.g. Docker, but that _does_ require spawning a separate OS process for each request, which is generally more expensive than what a Wasm runtime does.
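      To illustrate point 1, here is a minimal, hypothetical WIT sketch (the names are made up). One component, written in, say, Rust, exports the interface, and a Java/TeaVM component imports it; the Component Model tooling checks the types on both sides:

        package example:greeter;

        interface greetings {
          // Return a greeting for the given name.
          greet: func(name: string) -> string;
        }

        world caller {
          // The calling component imports the interface from whichever
          // component (in whatever language) exports it.
          import greetings;
        }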

    • @suikast420
      @suikast420 1 year ago

      @@223tt322 That is very interesting. Thanks a lot for that info

  • @223tt322
    @223tt322 1 year ago +2

    Here are the slides with speaker notes, for reference: docs.google.com/presentation/d/1YY5MdCz1g3ONH_-uSPRxAfvLtyhA06rp/edit

  • @suikast420
    @suikast420 1 year ago

    My mind is a little blown. I really don't understand the purpose. Let's assume I am using Rust or Go, where I have an executable binary. Then I compile it to a bytecode language, and that should be faster than a native Go or Rust app? Am I missing something?

    • @223tt322
      @223tt322 1 year ago

      Rust or Go compiled to Wasm will generally _not_ be faster than native. The Wasm version will probably be slightly slower, depending on which runtime you use. However, the Wasm version will be portable (i.e. run on any OS and architecture) and provide fine-grained, efficient isolation. The best choice depends on which combination of those features (performance, portability, and isolation) is most important to you.

    • @suikast420
      @suikast420 1 year ago

      @@223tt322 Interesting. So the only reasons are platform independence and strong workload isolation, right?
      It remains exciting to observe the progress in the Wasm world.

    • @223tt322
      @223tt322 1 year ago +1

      @@suikast420 Yes, platform independence and strong isolation are the main benefits right now. I'm also excited about the Component Model's potential to enable a language-agnostic ecosystem for easy library reuse across language boundaries. For details on that, see Luke Wagner's Wasm Day keynote (ruclips.net/video/phodPLY8zNE/видео.html), along with Bailey and Kyle's SIG-Registries lightning talk (ruclips.net/video/lihQEVhOR58/видео.html).