Overview of Forks, Threads, and Asynchronous I/O

Applications that handle multiple connections can be built in one of three ways: by forking processes, by using multiple threads, or by using asynchronous I/O.

 

FORKING PROCESSES

Before getting started, let's define a 'process' simply as an instance of an application running on your system. Forking a process means duplicating it from its current point of execution, so when the new (child) process is created it has the same state as its parent at the time of the fork. Unlike threads, each forked process gets its own memory space into which that state is copied (which is why forked processes cannot accidentally clobber each other's data). Once the process is forked, the child process follows its own execution path, separate from its parent's, and the two processes can run simultaneously (in parallel). Forking allows a server to handle multiple concurrent connections. For example, the Apache web server can be configured to fork processes: a main process waits for remote machines to request connections, and when a request arrives, Apache forks a child process to handle it while the main process continues listening for other requests. Note that forking applies to Unix/Linux machines; MS Windows does not fork processes. Note also that processes are sometimes referred to as 'tasks'.
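The key point above, that a forked child gets its own copy of the parent's memory, can be demonstrated with a short sketch in Python (assuming a Unix/Linux system, since os.fork is unavailable on Windows; the counter variable is just an illustrative stand-in for application state):

```python
import os

counter = 0
pid = os.fork()  # duplicate this process from the current point of execution

if pid == 0:
    # Child process: it starts with a copy of the parent's state.
    counter += 1          # changes only the child's copy
    os._exit(counter)     # report the child's value back via its exit status
else:
    # Parent process: its own copy of counter is untouched by the child.
    _, status = os.waitpid(pid, 0)
    child_counter = os.WEXITSTATUS(status)
    print(f"parent counter={counter}, child counter={child_counter}")
    # prints: parent counter=0, child counter=1
```

The child incremented its copy of the variable, but the parent still sees 0, because each process has its own memory space after the fork.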

 

THREADS

There are two other ways that servers can handle multiple connections. One is by using threads. Threads are like sub-processes, but the major difference between a thread and a process is that all threads created by an application share the same state and memory space (remember that when you fork a new process, it gets its own copy of the application state). Creating new threads therefore requires less memory than forking new processes, but because all threads share the same application data, problems can arise when two threads try to update the same variable at the same time. When using threads you must keep your code synchronized so that two threads cannot modify a shared variable in conflicting ways.
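A minimal sketch of that synchronization problem in Python: two threads increment a shared counter, and a lock guards the read-modify-write so no updates are lost (without the lock, the final count could come up short):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:          # only one thread at a time may touch counter
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- deterministic because access was synchronized
```

Both threads read and write the same counter variable, which is exactly the shared-memory situation described above; the lock is what keeps the code synchronized.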

 

ASYNCHRONOUS I/O

Finally, a server could use asynchronous I/O. This approach uses a single process that creates no new processes or threads. Instead, the process runs an event loop that listens for connections. When a new connection is created, the event loop adds it to a queue, and it continually cycles through the queue to see whether any clients are requesting data. The queue may also contain other work (not related to clients, just other things the app needs to do). When the event loop finishes running some piece of code, it removes it from the queue and moves to the next item. Asynchronous I/O has an advantage over forking because it does not use as much memory (remember that each forked process gets a copy of the application's state, which requires memory). Async I/O does not use threads either, so you don't have to worry about synchronizing access to the variables in your app. The drawback to asynchronous I/O is that when each piece of code is run from the queue, everything else blocks until that piece of code completes. So if your app is in the middle of running some intensive code that takes time, new connections will not be handled until that code finishes.
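A rough sketch of this event-loop pattern, using Python's standard selectors module in a single process with no threads (the echo-in-uppercase behavior and the in-process client are illustrative assumptions, not part of any particular server):

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Listening socket: the event loop watches it for new connections.
listener = socket.socket()
listener.bind(("localhost", 0))   # port 0: let the OS pick a free port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data="accept")

def run_once():
    """One pass of the event loop: service whatever is ready right now."""
    for key, _ in sel.select(timeout=1.0):
        if key.data == "accept":
            # New connection: add it to the selector's "queue" of watched sockets.
            conn, _ = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data="echo")
        else:
            # Existing client has data: handle it, then move on.
            data = key.fileobj.recv(1024)
            if data:
                key.fileobj.sendall(data.upper())
            else:
                sel.unregister(key.fileobj)
                key.fileobj.close()

# Drive the loop with a client in the same script, for demonstration.
port = listener.getsockname()[1]
client = socket.create_connection(("localhost", port))
run_once()                    # loop pass 1: accept the connection
client.sendall(b"hello")
run_once()                    # loop pass 2: read the request, echo it back
reply = client.recv(1024)
print(reply)
```

Everything runs in one process: the selector plays the role of the queue, and each pass of run_once handles whatever work is ready. If one handler ran something slow, every other connection would wait, which is the blocking drawback described above.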

 

On a related note, when the operating system switches the CPU from one thread or process to another, that is called a 'context switch', and it is managed by the kernel of the OS. Here is a snippet about context switching from this url: http://www.linfo.org/context_switch.html

 

A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another.

A process (also sometimes referred to as a task) is an executing (i.e., running) instance of a program. In Linux, threads are lightweight processes that can run in parallel and share an address space (i.e., a range of memory locations) and other resources with their parent processes (i.e., the processes that created them).

A context is the contents of a CPU's registers and program counter at any point in time. A register is a small amount of very fast memory inside of a CPU (as opposed to the slower RAM main memory outside of the CPU) that is used to speed the execution of computer programs by providing quick access to commonly used values, generally those in the midst of a calculation. A program counter is a specialized register that indicates the position of the CPU in its instruction sequence and which holds either the address of the instruction being executed or the address of the next instruction to be executed, depending on the specific system.

Context switching can be described in slightly more detail as the kernel (i.e., the core of the operating system) performing the following activities with regard to processes (including threads) on the CPU: (1) suspending the progression of one process and storing the CPU's state (i.e., the context) for that process somewhere in memory, (2) retrieving the context of the next process from memory and restoring it in the CPU's registers and (3) returning to the location indicated by the program counter (i.e., returning to the line of code at which the process was interrupted) in order to resume the process.

 

 

Please post a comment if you have anything to add.