If you are coding for problems that have significant computational times, multithreading is a must! As a quick refresher, multithreading is an industry practice that has grown in popularity over the past ten years, and for good reason. In complicated tech jargon, multithreading is the ability of a central processing unit (CPU) to provide multiple threads of execution concurrently, supported by the operating system.

As a translation of what this means in everyday speech, we can use a border crossing checkpoint as an example. However, before I dive into the analogy, we need to establish some basic computer terms relevant to running lines of code. The CPU of a computer is essentially its brain, running all the applications on your computer.


Now, most CPUs in computers these days are made up of at least four cores. Each core is responsible for running a single thread of code (this is not exactly accurate, but for the sake of the example, let’s say it is). A thread, put simply, is a sequence of instructions that the CPU executes one after another.


By default, code runs as a sequential process along a single thread: each section of code must be executed, or ‘run’, before moving on to the next. This is perfectly fine if your project has very few lines of code; however, if you have thousands or hundreds of thousands of lines, it can lead to some pretty insane computational times!
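To put that in code terms (the function names below are made up purely for illustration), a plain Python script runs strictly top to bottom: the second call cannot start until the first one has finished.

```python
import time

def load_data():
    time.sleep(2)        # pretend this is 2 seconds of real work
    print("data loaded")

def process_data():
    time.sleep(2)        # another 2 seconds of pretend work
    print("data processed")

# Sequential by default: process_data() cannot start
# until load_data() has completely finished.
load_data()
process_data()
```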

Now, back to the metaphor. Let’s say that the CPU is the border crossing between Queensland and NSW. Let’s also say that for some reason (perhaps a certain pandemic) the borders have been closed except for a single location. Now let’s assume the government has set up checkpoint windows at this location to stop incoming residents and check each one for any flu-like symptoms (bugs).

Let’s pretend each checkpoint window is a core of the CPU and each resident trying to move through represents a line of code. If it is a particularly quiet week and only a few hundred people are passing through, the border officials may only require one window to funnel residents through. There may only be a 1–2 minute wait for drivers to be checked and allowed to pass.

Now let’s say a big event is taking place in Queensland and a few hundred thousand NSW residents want to go. Remember, each one needs to be checked before they can cross the border. Now, if we had a single window open, checking each individual, you can only imagine the time it would take to move through the masses!


But hang on, we have three spare windows, right? Correct! So instead of trying to funnel all these people through a single checkpoint, we decide to open the other three, so we can begin testing four sets of people concurrently, dramatically decreasing the wait time.

This is a rather crude example of how multithreading works when programming.

Let’s say we have one section of code that will take ten minutes to run, and three more sections after it that each take ten minutes as well. If we are using a single core, we need to wait 40 minutes before every section has run and we can check its output. It is a one-after-another approach.
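Sketched roughly in Python (with seconds standing in for minutes so the example actually finishes), the single-core, one-after-another approach looks something like this:

```python
import time

def section(name, seconds):
    # stand-in for a chunk of work; imagine ten minutes per section
    time.sleep(seconds)
    print(f"{name} finished")

start = time.time()
for i in range(1, 5):
    section(f"section {i}", 10)   # each section only starts once the previous one is done
print(f"Total time: {time.time() - start:.0f} seconds")   # roughly 40 seconds
```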


However, if we set up our code so that each of these sections runs on a different core, we can execute all four concurrently and be done in roughly ten minutes, saving 30 minutes!
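Here is a rough sketch of the concurrent version using Python’s built-in threading module: the four sections are kicked off together and we simply wait for them all to finish. (One caveat worth flagging: because of Python’s global interpreter lock, threads overlap nicely for waiting-heavy work like the sleep below or I/O; for heavy number-crunching you would typically swap in the multiprocessing module so each section really does get its own core.)

```python
import threading
import time

def section(name, seconds):
    # stand-in for a chunk of work; imagine ten minutes per section
    time.sleep(seconds)
    print(f"{name} finished")

start = time.time()
threads = [
    threading.Thread(target=section, args=(f"section {i}", 10))
    for i in range(1, 5)
]
for t in threads:
    t.start()    # all four sections begin at (almost) the same time
for t in threads:
    t.join()     # wait until every section has finished
print(f"Total time: {time.time() - start:.0f} seconds")   # roughly 10 seconds
```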

 


This is an example of an industry practice SAPHI engineering has implemented to reduce downtime for clients and deliver high-quality work fast. If you enjoyed this post on multithreading in Python, please support us by following our SAPHI engineering page on LinkedIn. You might also enjoy our post about GitHub’s big announcement here.

Thank you for reading, and as always, do not be afraid to reach out to us on LinkedIn or via our contact page here; we would love to hear from you!