How to create multiple deployments from one project in Prefect

  • Published: Sep 7, 2024
  • Learn how to create multiple deployments from one project in Prefect.
    Get the code here: prefec.tv/420xMZP
    Video Playlist
    -----------------------
    What is Prefect: • What is Prefect?
    Getting Started with Prefect Cloud: • Getting Started with P...
    Connect with Us
    -----------------------
    Website: www.prefect.io/
    Read the docs: docs.prefect.i...
    GitHub: github.com/Pre...
    Connect with us on LinkedIn: / prefect
    And Twitter: / prefectio
    Subscribe: / @prefectio
    Happy engineering!
    Tags: data orchestration, data engineering, data engineering projects, data tools, data education, data stack
  • Science
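To illustrate the video's topic, here is a minimal sketch of a `prefect.yaml` that defines two deployments in one project. All names, entrypoints, parameters, the schedule, and the work pool `my-process-pool` are hypothetical, and the exact schema may differ between Prefect versions:

```yaml
# Hypothetical prefect.yaml: two deployments from one project.
deployments:
  - name: etl-daily
    entrypoint: flows/etl.py:etl_flow
    parameters:
      source: "s3://example-bucket/raw"
    work_pool:
      name: my-process-pool
  - name: report-weekly
    entrypoint: flows/report.py:report_flow
    schedule:
      cron: "0 6 * * MON"
    work_pool:
      name: my-process-pool
```

Running `prefect deploy --all` from the project root would then register both deployments against the same work pool.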

Comments • 5

  • @w.e.b_b
    @w.e.b_b 8 months ago +1

    The problem with this approach is that now your parameters are static. How can I have multiple deployments in a single GitHub repository where the params are dynamic?

  • @PatrickSteil
    @PatrickSteil 9 months ago

    Sweet. So what is happening under the covers when you “prefect deploy” in this case with one worker and two entry points? Is there one process to execute both (so they can’t run simultaneously), or...?

    • @KevinGrismorePrefect
      @KevinGrismorePrefect 9 months ago

      Regardless of which worker type you choose, flow runs started from deployments run independently, so there's no restriction on the number of entrypoints. When an independent flow run starts, it'll just look for the entrypoint function at the specified path and execute it if found.
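The mechanism described above, resolving an entrypoint string like `flows/etl.py:etl_flow` to a callable, can be sketched with the standard library. This is illustrative only and is not Prefect's actual loader; the function name `load_entrypoint` is hypothetical:

```python
# Sketch: resolve a "path/to/file.py:func_name" entrypoint string
# to the function it names (illustrative, not Prefect's implementation).
import importlib.util
from pathlib import Path


def load_entrypoint(entrypoint: str):
    """Split the entrypoint string, import the module from its file
    path, and return the named function from that module."""
    path_str, func_name = entrypoint.rsplit(":", 1)
    path = Path(path_str)
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, func_name)
```

Because each flow run performs this lookup independently, nothing ties two entrypoints in the same file or repository to a single process.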

    • @PatrickSteil
      @PatrickSteil 9 months ago

      @@KevinGrismorePrefect Thx!
      Will a worker spawn multiple simultaneous processes if two entry points are executed simultaneously?
      How do you allow/restrict this?

    • @KevinGrismorePrefect
      @KevinGrismorePrefect 9 months ago

      @@PatrickSteil Yep, if on a process worker, it'll run multiple subprocesses. You can constrain the number of concurrently running flows on a work pool or one of the work queues within the pool using its concurrency limits. If you were to reach a point where you need more concurrent compute resources than a process worker can provide, switching to a worker type that utilizes dynamically scaling infrastructure, like Kubernetes, ECS, Cloud Run, or ACI, is recommended.
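The work-pool concurrency limit mentioned above can be set from the Prefect CLI. This is a sketch; the pool name `my-process-pool` and the limit of 2 are illustrative:

```
# Allow at most 2 flow runs to execute concurrently on this pool
# (pool name is hypothetical).
prefect work-pool set-concurrency-limit "my-process-pool" 2

# Confirm the limit on the pool:
prefect work-pool inspect "my-process-pool"
```

With a limit of 2, a process worker would still spawn a subprocess per flow run, but runs beyond the limit wait in the queue until a slot frees up.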