Optimization is a chief concern among computer scientists, and consumer demand constantly pushes the effort to make computers run faster and more efficiently to new levels. One method of reducing an application’s run time is to split the work of a process across multiple threads, which can then run almost simultaneously. This allows the computer to accomplish more in a given amount of time or with a given expenditure of energy (similar to the principle of applying more tackle blocks to a pulley system, as illustrated above); it is essentially a form of multitasking for processors.
Program – The collected set of written instructions which will tell the computer what to do when called upon. Programs are static, roughly equivalent to a genome.
Process – An individual portion of the program, containing a specific set of instructions, which can be called by the computer, decoded by the processor and implemented.
Context – Made up of multiple pieces of information, the context of a program provides a comprehensive description of a process’ phases of execution and of the phase a particular process is currently in.
Thread – A set of instructions for the computer within a process. A thread can be thought of as a pared-down process which can then be nested within a process.
If a process is single-threaded, then the computer can only call one thread when it runs the entire process. Multi-threaded processes allow for more than one thread to be called during the running of a single process, the benefits of which are described below.
Bound threads depend on their assigned lightweight processes to determine whether or not they can run. Unbound threads are not assigned to a specific lightweight process, giving them relative autonomy in how they receive address space in which to operate.
If a thread is defined as User-level or Application-level, then its library routines are managed in user space rather than by the kernel.
Lightweight Process – A special type of thread which operates within the kernel, executing code on the kernel level and also performing system calls.
Concurrency – When two or more processes are in progress at the same time, as opposed to parallelism, in which two or more processes are actually being executed at the same time.
Given a database with multiple entries in respective fields, a programmer may need to sort the information according to a set of criteria. The programmer could write a single program that sorts one column from beginning to end, then jumps to the next column, and so on. To make this more efficient, however, the programmer can create multiple subprocesses that execute the sorting command on each column, all at virtually the same time. This is the efficiency that multithreading allows. Multithreaded systems have been around since the 1950s, though the principles of multithreaded processing were not incorporated into mainstream personal computers until the late 1990s and early 2000s, with multiprocessing the more popular option before then. With the addition of multithreading, computers became able to do the kind of simultaneous work that general users had assumed they were already doing. Programmers continue to examine the possibilities of multithreaded systems, with the potential for millions of threads within a single process being explored in the realm of high-end computing.
How it works
Originally, the relationship between thread and process was 1:1. As programming became more sophisticated, it became possible to place multiple threads within a single process, which offered a cheap alternative to continually filling address space with processes whose functions were all directed at the same goal. Within a process, the programmer designates each new thread with the thr_create command; parameters such as stack address and stack size, among others, can be added to specify how the created thread interacts with other threads in the process. In this way, multiple threads can be created to operate within one process. Once the process is called by the computer, the threads are assigned by their lightweight processes to a processor, which executes each thread's instructions. Consideration should also be given to how single-threaded and multithreaded processes will interact when called and executed together: because the run-time system's synchronization costs are the same regardless, programmers recommend using separate systems to handle single-threaded and multithreaded applications.
- Solaris Troubleshooting and Performance Tuning – Explanation of terminology and the idea of the process within a Unix environment.
- Computer Programming Terms & Definitions – Additional terms to consider when learning about, creating, or implementing computer programs.
- Glossary of Computer Terms – Created by the Academic Computing and Communications Center at the University of Illinois Chicago. Sections are A-C, D-H, I-N, O-S, and T-Z.
- Multithreaded Applications – Provides an explanation of multithreaded programming and shows examples of code used to implement multithreaded systems.
- Multithreaded Programming with ThreadMentor: A Tutorial – How to use the ThreadMentor system to create multithreaded programs.
- A Case Study of Multi-Threading in the Embedded Space – Study on how to implement multithreaded processes in environments such as microcontrollers and miniaturized systems.
- Optimization Principles and Application Performance Evaluation of a Multithreaded GPU Using CUDA – Paper on the improvements accomplished in computer graphics hardware using multithreaded processes.
Picture Credit: Six-Pulleys, Wikimedia Commons, Mike Malak, 2006