Unlocking your CPU cores in Python (multiprocessing)

  • Published: 21 Sep 2024
  • How to use all your CPU cores in Python?
    Due to the Global Interpreter Lock (GIL) in Python, threads don't really get much use out of your CPU cores. Instead, use multiprocessing! Process pools are beginner-friendly but also quite performant in many situations. Don't fall into some of the many traps of multiprocessing though; this video will guide you through it.
    ― mCoding with James Murphy (mcoding.io)
    Source code: github.com/mCo...
    SUPPORT ME ⭐
    ---------------------------------------------------
    Patreon: / mcoding
    Paypal: www.paypal.com...
    Other donations: mcoding.io/donate
    Top patrons and donors: Jameson, Laura M, Vahnekie, Dragos C, Matt R, Casey G, Johan A, John Martin, Jason F, Mutual Information, Neel R
    BE ACTIVE IN MY COMMUNITY 😄
    ---------------------------------------------------
    Discord: / discord
    Github: github.com/mCo...
    Reddit: / mcoding
    Facebook: / james.mcoding
    MUSIC
    ---------------------------------------------------
    It's a sine wave. You can't copyright a pure sine wave right?

Comments • 235

  • @tdug1991
    @tdug1991 2 years ago +76

    It's also worth noting that smaller chunk sizes may be better for unpredictably distributed job times, as one runner may randomly grab many expensive jobs and stall the pool while the rest of the processes finish.
    Great video, as always!
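
    For illustration, a minimal sketch of tuning chunksize (the job function and timings here are made up):

        import time
        import random
        from multiprocessing import Pool

        def process_job(seconds):
            # Stand-in for a task whose runtime varies unpredictably.
            time.sleep(seconds)
            return seconds

        if __name__ == "__main__":
            jobs = [random.uniform(0.01, 0.5) for _ in range(100)]
            with Pool() as pool:
                # chunksize=1 hands out one job at a time, so one worker
                # cannot hoard a batch of slow jobs while the others idle.
                results = pool.map(process_job, jobs, chunksize=1)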

  • @unusedTV
    @unusedTV 2 years ago +18

    Your video is about two years late for me!
    I was working on a heat transfer simulation in Python where we had to compare hundreds of different input configurations. I knew about the GIL and multiprocessing in general outside of Python, but had to figure out myself how to get it to work. Eventually I settled on a multiprocessing pool and it worked wonders, because now we could run 32 simulations in parallel (Threadripper 1950x).
    Quick caveat that I don't hear you mention: a lot of processors have hyperthreading/SMT (intel/amd respectively), showing double the amount of cores in the task manager. In our case we found that spawning a process for each physical core provided better results than using all logical cores.
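
    A minimal sketch of that caveat; psutil is a third-party package (an assumption here, pip install psutil):

        import os
        from multiprocessing import Pool

        import psutil

        # os.cpu_count() reports logical cores (hyperthreads included);
        # psutil can report physical cores only.
        n_physical = psutil.cpu_count(logical=False) or os.cpu_count()

        if __name__ == "__main__":
            with Pool(processes=n_physical) as pool:
                pass  # submit CPU-bound work here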

  • @ArtML
    @ArtML 2 years ago +54

    \o/ Yay! The long-awaited multiprocessing video! Always appreciate the humor in the intros! :D
    Thanks a lot, I am on a path of making parallelization / multiprocessing second nature in my coding - these videos help greatly!
    More topic suggestions:
    - Simple speed-ups using GPUs
    - Pandas speedup with Dask - unlocking multiple cores
    - Numba, JAX and the overview of JIT compilers
    - Cython, and the most convenient (easy-to-use) wrappers for C++ implementations
    - All about Pickling, best practices/fastest ways to write picklers for novel objects

    • @ajflink
      @ajflink 2 years ago +1

      And GPU speedups without Nvidia.

  • @michaellin4553
    @michaellin4553 2 years ago +148

    The funny thing is, adding random noise is actually a useful thing to do. It's called dithering, and is used nearly everywhere in signal processing.

    • @tommucke
      @tommucke 2 years ago +16

      You would, however, apply it to the analog signal at about half the sampling rate in order to get better results for the digital signal (and smooth it with a capacitor afterwards). It makes no real sense to add it on the digital side, which is the only thing Python can do.

    • @gamma26
      @gamma26 2 years ago +7

      @@tommucke Unless you're doing image processing and want to achieve that effect I suppose. Pretty niche tho

    • @maxim_ml
      @maxim_ml 1 year ago +7

      It can be used as data augmentation in training a speech recognition model

    • @louisnemzer6801
      @louisnemzer6801 1 year ago +5

      'I'm going to need those sound files with random noise added in my email inbox by five pm' 😅

  • @jakemuff9407
    @jakemuff9407 2 years ago +256

    Great video! Maybe some more "real world" examples would be useful. Knowing that my code *could* be parallelized and actually parallelizing the code are two very different things. I've found that knowledge of multithreading in python does not translate to automatic code speed up. And of course no two problems are the same.

    • @MrTyty527
      @MrTyty527 2 years ago +11

      I think it is more about doing experiments on asyncio/threading/multiprocessing on your own - everyone has different Python use cases

    • @ibrahimaba8966
      @ibrahimaba8966 2 years ago +3

      Multithreading is for I/O-bound tasks; I use multiprocessing with ZeroMQ to do some extensive image processing tasks!

    • @chndrl5649
      @chndrl5649 2 years ago +1

      Take crawling as an example: it would be a huge time saver if you want to crawl multiple words at a time.

    • @chndrl5649
      @chndrl5649 2 years ago

      It all depends on how you can split your work.

    • @v0xl
      @v0xl 2 years ago +11

      python is not the right tool for high performance applications anyway

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 1 year ago +11

    5:07 Threading is also useful for turning blocking operations into nonblocking ones. For example, asyncio provides nonblocking calls for reading and writing sockets, but not for the initial socket connection. Simple solution: push that part onto a separate thread.
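
    A minimal sketch of that pattern using asyncio.to_thread (Python 3.9+); the host and port are made up:

        import asyncio
        import socket

        def blocking_connect(host, port):
            # Ordinary blocking call; fine to run on a worker thread.
            return socket.create_connection((host, port))

        async def main():
            # Runs the blocking call on a thread, keeping the event loop free.
            sock = await asyncio.to_thread(blocking_connect, "example.com", 80)
            sock.close()

        asyncio.run(main())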

  • @SpeedingFlare
    @SpeedingFlare 2 years ago +7

    That pool thing is so cool. I like that it spawns as many processes as there are cores available. I wish my work had more CPU bound problems

  • @zemonsim
    @zemonsim 1 year ago +3

    This video was so helpful! I recently converted my mass encryption script to use multiprocessing. To encrypt my dataset of 450 MB of images, it went from an estimated 11 hours to just 10 minutes, doing the work at around 750 KB per second.

  • @jesuscobos2201
    @jesuscobos2201 1 year ago +5

    Love your videos. I usually watch all of them just for fun, but this has enabled me to speed up a very heavy optimization for my science stuff. Ty for your dedication. I can assure you it has real-world implications :)

  • @OutlawJackC
    @OutlawJackC 2 years ago +3

    Your explanation of the GIL makes so much more sense than other people's :)

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 1 year ago +14

    9:27 Actually, there is a faster way of sharing data between processes than sending picklable objects over pipes, and that is to use shared memory. Support for this is built into the multiprocessing module. However, you cannot put regular Python objects into shared memory: you have to use objects defined by the ctypes module. These correspond to types defined in C (as the name suggests): primitive types like int or float; array and struct types are also allowed. But avoid absolute pointers.
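
    A minimal sketch of that built-in support, via the multiprocessing.Value and Array wrappers over ctypes (the worker logic is made up):

        from multiprocessing import Process, Value, Array

        def work(counter, data):
            with counter.get_lock():
                counter.value += 1       # shared C int, updated under its lock
            for i in range(len(data)):
                data[i] *= 2.0           # shared C double array

        if __name__ == "__main__":
            counter = Value("i", 0)
            data = Array("d", [1.0, 2.0, 3.0])
            p = Process(target=work, args=(counter, data))
            p.start()
            p.join()
            print(counter.value, list(data))  # 1 [2.0, 4.0, 6.0]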

  • @zlorf_youtube
    @zlorf_youtube 2 years ago +2

    Me learns a new python thing... Starts using it in every fucking unnecessary place. Feels good.
    Really good thing to talk about typical pitfalls.

  • @michaelwang3306
    @michaelwang3306 2 years ago +2

    The clearest explanation on this topic that I have ever seen! Really nice!! Thanks for sharing!

  • @etopowertwon
    @etopowertwon 2 years ago +1

    Multiprocessing helped me a lot recently. I had a script that periodically loads tons of small XML files from a net share, processes them and saves them locally. Single-threaded it ran in 30 seconds, multiprocessed it ran in about 6 seconds.

  • @daniilorekhov9191
    @daniilorekhov9191 2 years ago +6

    Would love to see a video on managing shared memory in multiprocessing scenarios

  • @superzcreationz2032
    @superzcreationz2032 22 days ago +1

    This is what I was searching for.....very very very useful video...😊...that's why I subscribed to you 😊

  • @HitAndMissLab
    @HitAndMissLab 2 months ago

    Thank you for summarising it so well and in such an intelligible way.

  • @knut-olaihelgesen3608
    @knut-olaihelgesen3608 2 years ago +2

    You are actually the best at advanced python videos! Love them so much

  • @quillaja
    @quillaja 4 months ago

    god that's so much easier than what i've been doing, writing all the coordination junk around Queue

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 1 year ago +1

    2:07 Remember that “I/O” can also include “waiting for a user to perform an action in a GUI”.

  • @pamdemonia
    @pamdemonia 2 years ago +1

    It's really interesting to see the threading results (avg ~2.5 sec per file, but only 7.6 sec total). Cool.

  • @unotoli
    @unotoli 1 year ago

    So well explained. One nice-to-have thing - a quick tip on how to debug (see a summary of time to process) the most CPU-intensive tasks (functions, like the wav transformation in this case).

  • @austingarcia6060
    @austingarcia6060 2 years ago +4

    I was about to do something involving multithreading and this video appeared. Perfect!

  • @eldarmammadov7872
    @eldarmammadov7872 1 year ago

    Liked the way you speak about all three modules, asyncio, threading, and multiprocessing, in one video.

  • @nocturnomedieval
    @nocturnomedieval 2 years ago +13

    This is so good and clear. A must share. BTW, how do these techniques relate to the case when you are using numba with the option parallel=True?

  • @Lolwutdesu9000
    @Lolwutdesu9000 2 years ago

    While I'm not using multi-threading in my current work, I'll definitely save this video so I can one day return to it!

  • @user-zu1ix3yq2w
    @user-zu1ix3yq2w 2 years ago +1

    i went down a rabbit hole, MP, numba, cython, pypy... The speedup people can get is insane.

    • @nocturnomedieval
      @nocturnomedieval 2 years ago

      Could you please help me find the answer: how does numba with the option parallel=True relate to cores/threads/processes? :D

  • @codewithjc4617
    @codewithjc4617 2 years ago +3

    This is great content, I’m a big fan of C++ and Python and this is just amazing

  • @HomoSapiensMember
    @HomoSapiensMember 2 years ago

    really appreciate this, struggled understanding differences between map and imap...

  • @POINTS2
    @POINTS2 2 years ago +1

    Yes! Pool is the way to go. Definitely an improvement over threading, and it allows you to not have to worry about the GIL.

    • @jonathandawson3091
      @jonathandawson3091 2 years ago +2

      Not always an improvement. A process costs a lot more overhead as he explained in the video. Other languages don't have the stupid GIL, hope that it's also removed from python someday.

    • @sebastiangudino9377
      @sebastiangudino9377 1 year ago +1

      @@jonathandawson3091 It's a safety measure, it'll probably never be removed from python. If you really need "unsafe" threads you could probably just write your threaded function in C and interop it with Python. Whether that's actually worth it is up to you, but a lot of times it is not.

    • @lawrencedoliveiro9104
      @lawrencedoliveiro9104 1 year ago

      The GIL is an integral part of reference-counting memory management. Getting rid of it completely means moving to Java-style pure garbage collection, where even the simplest of long-running scripts could end up consuming all the memory on your system.
      There is a project called “nogil”, which sets out to loosen some GIL restrictions a bit. That should give some useful speedups, without abandoning the GIL altogether.

  • @flybuy_7983
    @flybuy_7983 1 year ago

    THANK YOU MY BROTHER FROM ANOTHER COUNTRY AND ANOTHER FAMILY!!!

  • @walterppk1989
    @walterppk1989 1 year ago +1

    Brilliant video. Absolutely flipping gold

  • @matejlinek287
    @matejlinek287 2 years ago

    Wow, finally a mCoding video where I didn't learn anything new :-D Thank you so much James, now I can rest in peace :)

  • @piotradamczyk6740
    @piotradamczyk6740 1 year ago

    I was looking for this kind of lesson for years. Please do more.

  • @riccardocapellino9078
    @riccardocapellino9078 2 years ago

    I tested this on my old code used for my thesis, which basically performs the same calculation hundreds of times with no I/O (calculates flows in an aircraft engine turbine stage). Took me 10 minutes to adjust the code and made it 40% FASTER

  • @robertbrummayer4908
    @robertbrummayer4908 2 years ago

    Great video! Man, your videos are awesome. And every time I learn a little bit and get a little bit better, like you say :) Best wishes from Austria!

  • @StopBuggingMeGoogleIHateYou
    @StopBuggingMeGoogleIHateYou 1 year ago

    Great video. I was not aware of that module! As it happens, I've spent the last six weeks writing something that can run thousands of processes and aggregate the results. I'm not going to throw it away after watching this video, but I will ponder how I might've designed it differently had I known.

  • @dlf_uk
    @dlf_uk 2 years ago +24

    What are the benefits/drawbacks of this approach vs using concurrent.futures?

    • @bersi3306
      @bersi3306 2 years ago +4

      The answer resides in the difference between concurrency and parallelism.
      When to use them also makes a lot of difference (here, "CPU-bound" problems to solve with parallelism vs "I/O-bound" problems to solve with concurrency).
      You should also check (on the concurrency side) the difference between a threaded function vs a coroutine.
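
      A minimal side-by-side sketch (concurrent.futures.ProcessPoolExecutor is a higher-level wrapper over the same process-pool machinery):

          from concurrent.futures import ProcessPoolExecutor
          from multiprocessing import Pool

          def square(x):
              return x * x

          if __name__ == "__main__":
              with Pool() as pool:
                  a = pool.map(square, range(10))        # returns a list
              with ProcessPoolExecutor() as ex:
                  b = list(ex.map(square, range(10)))    # returns a lazy iterator
              assert a == b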

  • @akalamian
    @akalamian 1 year ago

    Great teaching, simple and effective. I've been using this multiprocessing with my coroutines, my program is flying, lol

  • @EvanBurnetteMusic
    @EvanBurnetteMusic 2 years ago +3

    This is great! Thanks! Would love a guide on how to use shared memory with multiprocess. I've been optimizing a wordle solver that looks for five words with 25 unique letters as in the recent Stand Up Maths video. On my 8 core machine, each subprocess ends up using half a gig of memory! My data structure is a list of variable length sets. With pool I have to resort to pool.starmap(func, zip(argList1, argList2)) to pass all the data I need into each subprocess. Compared with my naive manual multiprocess implementation, the mp pool version is 30% slower. I'm hoping it can be faster with shared memory. Again, I really appreciate that you created an almost real world problem to demonstrate multiprocessing. It gave me the context I needed to implement this with my program.

    • @volbla
      @volbla 2 years ago +2

      I tried using multiprocessing on my prime number sieve where each process has to write to the same array. It didn't really end up being faster (I'm probably bottlenecked by RAM speed), but I did get the shared memory to work with numpy arrays. In your main process (with SharedMemory imported from multiprocessing.shared_memory) you do:
      shared_mem = SharedMemory(name="John", create=True, size=#bytes)
      an_array = np.ndarray((#elements,), dtype=#type, buffer=shared_mem.buf)
      # Put your data in the array
      And in each subprocess you reference the memory by basically doing the same thing again:
      shared_mem = SharedMemory(name="John")
      an_array = np.ndarray((#elements,), dtype=#type, buffer=shared_mem.buf)
      # Do something with the data
      In this case it was also useful to pass the process inputs through a Queue rather than function arguments. Then they only have to be instantiated once, even when consuming a lot of unpredictable data.
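
      One caveat worth adding to the sketch above: shared memory segments are not freed automatically, so pair them with explicit cleanup (a minimal sketch; the name and size are made up):

          from multiprocessing.shared_memory import SharedMemory

          shm = SharedMemory(name="John", create=True, size=1024)
          try:
              ...  # hand the name to workers and do the work
          finally:
              shm.close()   # every process closes its own handle
              shm.unlink()  # exactly one process (usually the creator) unlinks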

    • @EvanBurnetteMusic
      @EvanBurnetteMusic 2 years ago +1

      @@volbla Thanks for the queue tip I will definitely be trying that out!

  • @SkyFly19853
    @SkyFly19853 2 years ago +1

    Very useful for video game development.

  • @shikutoai
    @shikutoai 1 year ago

    Me: Audio Engineer
    You: 'Let's add random noise to a bunch of audio files'
    Me: AOWERIGLHAOLEIRGHAOPEIRGHSOLIERHGNBapoiewhftgliakrhegaopilerhgsireghsSGrfegplihjsoigjhsdfgoiHJPOIGHSOIDFHGSAPODRJfg
    Edit: Thanks for the intro to multiprocessing in Python

  • @mingyi456
    @mingyi456 2 years ago +2

    Please make a video about picklable objects and pickling, I would like to know more about it.

  • @imnotkentiy
    @imnotkentiy 2 years ago +1

    -It is the end
    -ha. All this time i've only been using 1/16 of my true power, behold
    -nani?!

  • @mme725
    @mme725 2 years ago

    Nice, might play with this when I get off work later!

  • @m0Ray79
    @m0Ray79 2 years ago +1

    And don't forget that pure Python is not the only option. Pyrex, which is translated to C/C++, opens even broader bridges towards performance.

    • @ДмитроПрищепа-д3я
      @ДмитроПрищепа-д3я 2 years ago

      Why use Pyrex when there's Cython tho?

    • @m0Ray79
      @m0Ray79 2 years ago

      @@ДмитроПрищепа-д3я Pyrex is a python language superset. Cython is its translator. I mentioned it in my videos.

    • @ДмитроПрищепа-д3я
      @ДмитроПрищепа-д3я 2 years ago

      @@m0Ray79 Cython is also a python superset tho. And no, Cython isn't a translator for Pyrex, it's a separate thing that was influenced by Pyrex back then.
      And Pyrex is kinda dead with its last stable release being 12 years old.

    • @m0Ray79
      @m0Ray79 2 years ago

      @@ДмитроПрищепа-д3я The syntax and the whole idea were introduced in Pyrex; I'm still calling it by the old name. Ok, let's say Pyrex became Cython. And the file extension is still .pyx.

  • @goowatch
    @goowatch 2 years ago +1

    You should preferably use a per-core display to better show what you want to explain. Thanks for sharing your experience.

  • @saketkr
    @saketkr 2 months ago

    Awesome! This was so so helpful. Could you also make one about all of these, i.e., asyncio, multi-threading, multi-processing and then workers, please?

  • @jdsahr
    @jdsahr 1 year ago +1

    This was just absolutely fantastic. I'm using this for processing radar data (numpy is involved), and the speedup is great!
    But because "experience is a dear school, and a fool will have no other" I did spend several hours banging my head against the following:
        with Pool(8) as p:
            print(p.map(my_etl_func, a_list_of_filenames))
    This works fine, but if you replace p.map() with p.imap(), then the print() statement prints out an address of some kind of iterator. The same thing happens with p.imap_unordered(), of course.
    The issue is that p.map() returns a conventional list, but p.imap() and p.imap_unordered() return an iterator.
    You can print(list_thing) and something useful happens, but when you print(an_iterator_thing) you get gobbledegook that isn't useful.
    It took me hours to figure out what was going on; hopefully that is hours that no one else has to spend. But, I have to admit that I probably learned more than those who will benefit from my folly.
    ----
    For those who care about large binary datasets, I recommend HDF5 / h5py.

    • @mCoding
      @mCoding 1 year ago

      Great to hear you are getting value out of multiprocessing! Yes this is a common thing in Python where many iterables are lazy (like the builtin map). They return an iterator and if you really want a list just call list on them.
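
      Concretely, a minimal sketch reusing the names from the comment above (with a stand-in function and filenames):

          from multiprocessing import Pool

          def my_etl_func(filename):  # stand-in for the real ETL function
              return filename.upper()

          if __name__ == "__main__":
              a_list_of_filenames = ["a.dat", "b.dat", "c.dat"]
              with Pool(8) as p:
                  # imap/imap_unordered are lazy: iterate, or call list().
                  for result in p.imap_unordered(my_etl_func, a_list_of_filenames):
                      print(result)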

  • @cjsfriend2
    @cjsfriend2 2 years ago +1

    You should do a video on using logging alongside the multiprocessing pool

  • @JohnZakaria
    @JohnZakaria 2 years ago +10

    If numpy / scipy do the computations in C land, why don't they release the GIL and acquire it back when the computation is done?
    When writing a C++ module using pybind11, you have the option to release the GIL, granted that you are doing pure C++.

    • @julius333333
      @julius333333 2 years ago +3

      pretty sure it does

    • @JohnZakaria
      @JohnZakaria 2 years ago +1

      @@julius333333 If it did, then threads would speed up the computation.
      Just like I/O calls that do release the GIL

    • @jheins3
      @jheins3 2 years ago

      Not an expert by far, and based on your comment, you probably know 100x more than I do.
      With that being said, I am going to speculate that the traditional behavior of numpy/scipy follows a standard API call to an external C/C++ optimized library (a DLL on Windows). The API is essentially a function that initiates the C-land magic. For error handling and for how the GIL works, the function call waits to receive the output from C-land before handing it back. Because the API is essentially a function call, the GIL cannot be released till the function returns.
      Again, that's a guess.

  • @iUnro
    @iUnro 2 years ago +11

    Hello. Can you explain what the difference is between the multiprocessing and concurrent.futures packages? To me they look the same, so I wonder why you chose one over the other.

  • @volundr
    @volundr 2 years ago +1

    This is very useful, thank you

  • @tobiasbergkvist4520
    @tobiasbergkvist4520 2 years ago +4

    On Linux/macOS you can use the fork syscall to "send" things that can't be pickled, but only when using `Process`, and not when using `Pool`, since the process needs to get all the unpickleable data at startup, and can't receive it after it has started.
    The child processes inherit the parent's memory with copy-on-write when using `fork`, meaning it only creates a copy of the memory if an attempt to modify it is made.
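
    A minimal sketch of that fork-based inheritance (the data and worker are made up; "fork" is unavailable on Windows):

        import multiprocessing as mp

        # Built before forking; never pickled.
        BIG_DATA = {i: i * i for i in range(1_000_000)}

        def worker():
            # Under fork, the child sees BIG_DATA directly; pages are only
            # copied if a process writes to them (copy-on-write).
            print(len(BIG_DATA))

        if __name__ == "__main__":
            ctx = mp.get_context("fork")  # Linux/macOS only
            p = ctx.Process(target=worker)
            p.start()
            p.join()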

  • @talhaibnemahmud
    @talhaibnemahmud 2 years ago +3

    Much needed video.
    I recently had to use multiprocessing for an Image Processing & AI Game assignment at the university.
    Although I used concurrent.futures.ProcessPoolExecutor(), this seems like a good option too.
    Maybe a comparison between these different options? 🤔

  • @unperrier5998
    @unperrier5998 2 years ago +1

    Can't wait for PEP 554 multiple interpreters to be mainline.

  • @joshuaowen1941
    @joshuaowen1941 2 years ago

    I love your videos man! Absolutely love them!

  • @anon_y_mousse
    @anon_y_mousse 2 years ago +6

    Once upon a time, multi-processing required multiple full CPUs, so it's a very understandable speako. It might also show your age. Although, it might make for an interesting video to make a Beowulf cluster with RPis and show how to program it to calculate something in parallel. Pi itself is obvious and easy, but perhaps how to do video encoding or 3D scene rendering would be a great fit.

    • @harrytsang1501
      @harrytsang1501 2 years ago +1

      The best way to talk about multiprocessing and task scheduling is with RTOS. The important parts are in some 2000 lines of C and it's amazing for embedded systems

    • @anon_y_mousse
      @anon_y_mousse 2 years ago

      @@harrytsang1501 It might be pretty cool if he did a whole video series, showing beginner methods in one and more advanced methods in another. Using RPi OS with Python for the beginner series, RTOS and C for the more advanced.

  • @Roule_n_Scratche
    @Roule_n_Scratche 2 years ago +5

    Hey mCoding, could you make a video about Cython?

  • @emilfilipov169
    @emilfilipov169 1 year ago

    OMG, you used start and end time as part of the code?!?!?!
    In the meantime I get rejected in an interview because I didn't know how to write a decorator to do that same task.

  • @imbesrs
    @imbesrs 2 years ago

    You are the only person I keep video notifications on for

  • @codedinfortran
    @codedinfortran 10 months ago

    thank you. This made it all very clear.

  • @MihaiNicaMath
    @MihaiNicaMath 2 years ago +2

    What is this, a CPU monitor window for ants? It needs to be at least 3 times as big! Joking aside, I enjoyed the video and learned something! The pitfalls are especially helpful. Thank you :)

  • @mostafaomar5441
    @mostafaomar5441 7 months ago

    Very useful. Thank you so much.

  • @neelroshania7116
    @neelroshania7116 2 years ago

    This was awesome, thank you!

  • @Darios2013
    @Darios2013 1 year ago

    Thank you for the great explanation

  • @Malins2000
    @Malins2000 2 years ago +1

    Great vid!
    When I learned to use mp, it was via the Process object.
    The latest application was training TF models on a GPU. I have an optimizing algorithm that searches for the best hyperparameters on models. Calculations for the next parameter set to check take some time (after 50 points it takes a lot of time tbh - longer than the model training). So I created mp.Process() objects that deal with the parameter search, which then communicate (via mp.Pipe()) with the process that builds and trains models on the GPU (to avoid multiple processes accessing the hardware at the same time). Using mp.Queue helps with communication ;) It works great! Keeps both the GPU and CPU cores busy all the time :D
    But I've never had to use Pool though :P
    So mp.Process is closer to me :D
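
    A minimal sketch of that Pipe-based pattern (the searcher loop and the scoring step are made up):

        import multiprocessing as mp

        def searcher(conn):
            # Stand-in for the hyperparameter-search process.
            for lr in (0.1, 0.01, 0.001):
                conn.send({"learning_rate": lr})  # propose a parameter set
                score = conn.recv()               # wait for the training result
                print("lr", lr, "->", score)
            conn.send(None)                       # signal that we're done

        if __name__ == "__main__":
            parent_conn, child_conn = mp.Pipe()
            p = mp.Process(target=searcher, args=(child_conn,))
            p.start()
            while (params := parent_conn.recv()) is not None:
                # Stand-in for "train one model on the GPU and score it".
                parent_conn.send(1.0 / params["learning_rate"])
            p.join()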

  • @maheshcharyindrakanti8544
    @maheshcharyindrakanti8544 1 year ago

    Took me a while due to a mistake, but it works, thanks

  • @AntonioZL
    @AntonioZL 2 years ago

    Very useful. Thanks!

  • @rohitathithya3964
    @rohitathithya3964 4 months ago

    @7:59 bruh!
    and slapping the like an odd number of times, wow

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 1 year ago

    2:49 Re “I say "CPU" a bunch but i actually mean "core"” --- remember that the term “core” for “CPU” was coined by Intel (and possibly other chipmakers) when they started putting the circuitry for multiple CPUs onto a single chip. The distinction isn’t really important, except that some proprietary server software from that time had licence fees that were calculated per-CPU, but somehow this was relaxed into “per-CPU-chip slot”. This way, if you had multiple CPUs in one chip, you didn’t have to pay as much as if the chips were in separate slots (which was quite common in servers in those days).
    Why did it matter? I guess to prevent a revolt by customers angry over licence fees ...

  • @felixfourcolor
    @felixfourcolor 1 year ago

    More videos on threading/asyncio please 😊

  • @joshinils
    @joshinils 2 years ago +2

    A video on how to figure out which pieces take the most time and how to optimize for time would be great.
    What profilers are there for Python? How do I use them, and how do I use them right?

    • @peterfisher3161
      @peterfisher3161 2 years ago +1

      "what profilers are there for python" Spyder and PyCharm have built in profilers.

    • @joshinils
      @joshinils 2 years ago +1

      @@peterfisher3161 Ah, so I'd have to use those IDEs, not VS Code... ok.
      I'd rather have some CLI solution or one that works with VS Code.

    • @replicaacliper
      @replicaacliper 2 years ago +1

      Scalene is an amazing profiler especially on Linux

    • @peterfisher3161
      @peterfisher3161 2 years ago +1

      @@joshinils Quickly looking it up, I found cProfile, which is built in and can be used from the terminal. Not much popped up for VS Code.
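
      A minimal sketch of the built-in profiler (the profiled function is made up):

          # From a terminal: python -m cProfile -s cumtime my_script.py
          import cProfile
          import pstats

          def slow():
              return sum(i * i for i in range(10_000_000))

          cProfile.run("slow()", "profile.out")
          pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)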

    • @jbusa5dimvzgkiik
      @jbusa5dimvzgkiik 2 years ago +2

      I've found yappi + gprof2dot to be really useful to find where asyncio applications are spending the CPU time.

  • @plays1361
    @plays1361 1 year ago

    Great video, the program works great

  • @ArgumentumAdHominem
    @ArgumentumAdHominem 8 months ago

    Great video. It would be super nice to have a worked example showing strong scaling. I have found that python multiprocessing is really not that great in terms of performance. Imagine the basic scenario: you need to compute a very expensive function of index 'i' between 0 and 1000. The function takes the same time for each index. There are no shared resources between processes. Naively one would expect the performance to scale as the inverse of the number of cores, but it is actually significantly worse, and I still don't fully understand why.

  • @SalmanKHAN.01
    @SalmanKHAN.01 1 year ago

    Thank you!

  • @percythemagicpenguin
    @percythemagicpenguin 2 years ago

    I'm kicking myself for not learning this stuff earlier.

  • @tanveermahmood9422
    @tanveermahmood9422 1 year ago

    Thanks bro.

  • @pranker199171
    @pranker199171 2 years ago +1

    Please do some more real world examples, this is amazing

  • @pedroarthurstudart1999
    @pedroarthurstudart1999 3 months ago

    5:42-5:43 and a reference to Minecraft damage-taking.

  • @s7gaming767
    @s7gaming767 1 year ago

    This helped a lot thank you

  • @cute_duck69x3
    @cute_duck69x3 1 year ago

    Awesome voice and helpful video 😍

  • @slava6105
    @slava6105 2 years ago

    Great! Now I can program your coffee machine in python to make me more than just 1 cup at a time

  • @dinushkam2444
    @dinushkam2444 2 years ago

    Great video
    Very interesting stuff

  • @aleale550
    @aleale550 2 years ago

    Great video! You could do a follow-up parallel computing video using Dask?

  • @hicoop
    @hicoop 2 years ago

    Such a good video!

  • @anihilat
    @anihilat 2 years ago

    Great Video!

  • @renancatan
    @renancatan 2 years ago

    very nice!
    Just a hint: you know so much about classes, functions, etc.
    Why not make an OOP-for-beginners video?
    Many beginners/intermediates still struggle with the most basic expressions from classes..

  • @Kamel419
    @Kamel419 2 years ago

    I had to solve a complex problem similar to this and ended up needing to use a specific sequence of queues and workers to solve it. I think I ended up with 6 total workers, each with a "parent" worker flowing into it. I think it would be neat to showcase something like this

  • @nikolastamenkovic7069
    @nikolastamenkovic7069 2 years ago

    Great one

  • @jaimedpcaus1
    @jaimedpcaus1 2 years ago

    This was a great Vid. 😊

  • @angmathew4377
    @angmathew4377 2 years ago

    Beautiful, it just reminded me of the C and C# world.

  • @ali-om4uv
    @ali-om4uv 2 years ago +1

    It would be great if you could show whether this can be used for ML hyperparameter tuning and other ML tasks.

  • @steinnhauser3599
    @steinnhauser3599 2 years ago

    Awesome!

  • @aadithyavarma
    @aadithyavarma 2 years ago +1

    Doesn't Python use pass by reference instead of pass by value, so does passing a large object to a method really matter here?

    • @ДмитроПрищепа-д3я
      @ДмитроПрищепа-д3я 2 years ago +6

      That's true, but here we pass that to another process, which happens by value (well, almost: it's pickled, passed as binary data, and then unpickled inside the other process).

    • @aadithyavarma
      @aadithyavarma 2 years ago +1

      @@ДмитроПрищепа-д3я Since individual processes don't share memory, the data needs to be copied for each process. That makes sense. Thanks!

  • @mahmoudshihab
    @mahmoudshihab 1 year ago

    I didn't quite understand pitfall number 3, when you showed:
    `items = [np.random.normal(size=10000) for _ in range(1000)] `
    Why is this a pitfall?
    Also, for the fib demonstration...
    For some reason, fib took 1.35s vs nfib took 35.05s
    Even the normal implementation took less time than multiprocessing at 12.93s
    I even copied the fib and n_fib from your github to ensure that I wasn't doing something wrong
    But I can't seem to replicate your results

  • @chrysos
    @chrysos 2 years ago

    I feel more than just informed

  • @triola_3
    @triola_3 1 year ago

    5:43 that sounded like the Minecraft oof

  • @nathanthreeleaf4534
    @nathanthreeleaf4534 2 months ago

    What if instead of completing a process in each pool, the data that is returned from each "pool" needs to be stored somehow?

  • @ewerybody
    @ewerybody 9 months ago

    This is cool and all for relatively small python scripts. What if I have a UI (maybe Qt for Python) and want to kick off some work on a pool of processes. I wouldn't want these processes to load (or even execute) any of the UI code 🤔