
Programming With Posix Threads Butenhof Pdf 12: Explore the Benefits and Costs of Threading



In this authoritative work, Linux programming expert Michael Kerrisk provides detailed descriptions of the system calls and library functions that you need in order to master the craft of system programming, and accompanies his explanations with clear, complete example programs.








Michael Kerrisk has been using and programming UNIX systems for more than 20 years, and has taught many week-long courses on UNIX system programming. Since 2004, he has maintained the man-pages project, which produces the manual pages describing the Linux kernel and glibc programming APIs. He has written or co-written more than 250 of the manual pages and is actively involved in the testing and design review of new Linux kernel-userspace interfaces. Michael lives with his family in Munich, Germany.


Parallel patterns are a high-level programming paradigm that enables non-experts in parallelism to develop structured parallel programs that are maintainable, adaptive, and portable whilst achieving good performance on a variety of parallel systems. However, there still exists a large base of legacy-parallel code developed using ad-hoc methods and low-level parallel/concurrency libraries such as pthreads, with no parallel patterns in the fundamental design. This code would benefit from being restructured and rewritten into pattern-based code, but the rewriting process is laborious and error-prone, because typical concurrency and pthreading code is closely intertwined with the business logic of the program. In this paper, we present a new software restoration methodology to transform legacy-parallel programs implemented using pthreads into structured farm and pipeline patterned equivalents. We demonstrate the restoration technique on a number of representative benchmarks, introducing patterned farm and pipeline parallelism into the resulting code, and we record improvements in both cyclomatic complexity and speedup.


Parallel patterns are a well-established high-level parallel programming model for producing portable, maintainable, adaptive, and efficient parallel code. They have been endorsed by some of the biggest IT companies, such as Intel and Microsoft, who have developed their own parallel pattern libraries, e.g. Intel TBB [35] and Microsoft PPL. A standard way to use these libraries is to start with a sequential code base, identify the portions of code that are amenable to parallelisation together with the exact parallel pattern to be applied, and then instantiate the identified pattern at the identified location in the code, possibly after restructuring the code to accommodate the parallelism. Sequential code therefore gives the cleanest starting point for the introduction of parallel patterns.

There exists, however, a large base of legacy code that was parallelised using lower-level, mostly ad-hoc parallelisation methods and libraries, such as pthreads [12]. This code is usually very hard to read and understand, is tailored to a specific parallelisation, and is optimised for a specific architecture, effectively preventing alternative (and possibly better) parallelisations and limiting the portability and adaptivity of the code. An even bigger problem, from a software engineering perspective, is the maintainability of legacy-parallel code: commonly, the programmer who wrote it is the only one who can understand and maintain it. This is due both to the complexity of low-level threading libraries and to the need for custom-built data structures, synchronisation mechanisms, and sometimes even thread/task scheduling implemented in the code. The benefits of using parallel patterns lie in a clear separation between the sequential and parallel parts of the code and a high-level description of the underlying parallelism, making patterned applications much easier to maintain, change, and adapt to new architectures.

In this paper, we deal with farms and pipelines. In a farm, a single computational worker is applied to a set of independent inputs; the parallelism arises from applying the worker to different input elements in parallel. In a parallel pipeline, a sequence of functions \(f_1, f_2, \ldots, f_m\) is applied to a stream of independent inputs \(x_1, \ldots, x_n\), where the output of \(f_i\) becomes the input to \(f_{i+1}\); the parallelism arises from executing \(f_{i+1}(f_i(\ldots f_1(x_k)\ldots))\) in parallel with \(f_i(f_{i-1}(\ldots f_1(x_{k+1})\ldots))\). We present a new methodology for the restoration of legacy-parallel code into an equivalent patterned form through the application of a number of identified program transformations; the ultimate goal is to provide a semi-automatic way of converting legacy-parallel code into equivalent patterned code, thereby increasing its maintainability, adaptivity, and portability whilst improving or at least maintaining performance. The transformations presented in this paper are intended as manual transformations; we envisage incorporating implementations of these refactorings into a semi-automated refactoring tool as future work.
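As a concrete illustration of the farm pattern just described, the following minimal sketch applies an independent worker to every element of an input set using tbb::parallel_for. It is not taken from the paper: the worker function process, the input data, and the problem size are hypothetical placeholders, and it assumes a TBB/oneTBB installation (compile with -ltbb).

// Farm sketch: the same worker is applied to independent inputs in parallel.
// `process` and the input/output vectors are illustrative placeholders.
#include <tbb/parallel_for.h>
#include <vector>
#include <cstddef>

static double process(double x) {        // hypothetical worker function
    return x * x + 1.0;
}

int main() {
    std::vector<double> in(1000, 2.0), out(in.size());

    // Each index is an independent task; TBB schedules them across cores.
    tbb::parallel_for(std::size_t(0), in.size(), [&](std::size_t i) {
        out[i] = process(in[i]);
    });
    return 0;
}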


The input to the Software Restoration process is a legacy-parallel C/C++ program based on some low-level parallelism library, such as pthreads, and the output is a semantically equivalent C/C++ program based on parallel patterns. In this way, we obtain well-structured code based on a higher level of parallel abstraction, which is significantly more maintainable and adaptive while still preserving the good performance of the original, highly tuned parallel version. In this paper, we focus on the TBB library as our target. It is important to note, however, that transforming the code into a patterned form also increases its portability and opens up a wider opportunity for parallelisation using different techniques, libraries, and pattern approaches. We target TBB as just one example of a typical and common pattern library, but the patternisation step could easily be replaced with other, more general frameworks, e.g. the Generic Reusable Parallel Pattern Interface (GrPPI) [18], which allows multiple different pattern backends to be targeted from a single interface. Indeed, prior work on refactoring to introduce GrPPI patterns [8] could easily be deployed at this stage, further increasing the portability of the patterned code.
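Since TBB is the target pattern library here, the sketch below shows what a small patterned three-stage pipeline might look like. It is a hedged illustration, not the paper's Listing 1: the stage bodies, the item count, and the token limit are invented, and the tbb::filter_mode spellings assume a recent oneTBB release (older TBB versions spell the modes tbb::filter::serial_in_order, etc.).

// Hypothetical three-stage pipeline: serial input stage, parallel middle
// stage, serial in-order output stage.  Compile with -ltbb.
#include <tbb/parallel_pipeline.h>
#include <cstdio>

int main() {
    int next = 0;
    const int n_items = 100;

    tbb::parallel_pipeline(
        /*max_number_of_live_tokens=*/8,
        // Stage 1: generate the input stream (serial, in order).
        tbb::make_filter<void, int>(tbb::filter_mode::serial_in_order,
            [&](tbb::flow_control& fc) -> int {
                if (next >= n_items) { fc.stop(); return 0; }
                return next++;
            })
        &
        // Stage 2: independent per-item work, run in parallel.
        tbb::make_filter<int, int>(tbb::filter_mode::parallel,
            [](int x) { return x * x; })
        &
        // Stage 3: consume results in their original order.
        tbb::make_filter<int, void>(tbb::filter_mode::serial_in_order,
            [](int y) { std::printf("%d\n", y); })
    );
    return 0;
}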


The Software Restoration methodology consists of a number of steps, each applying a class of code transformations, some of which are driven by pattern-discovery code analysis. The whole process is depicted in Fig. 1. In the description below, we focus on the code transformation steps, using a synthetic but representative parallel pipeline as a running example to demonstrate the transformations. Listing 1 presents the aspects of the original pthreads-based parallel code that are pertinent to this demonstration.


The POSIX thread libraries are a standards-based thread API for C/C++. They allow one to spawn a new concurrent flow of control. Threads are most effective on multi-processor or multi-core systems, where the new flow can be scheduled to run on another processor, gaining speed through parallel or distributed processing. Threads require less overhead than forking or spawning a new process, because the system does not initialise a new virtual memory space and environment for the process. While most effective on a multiprocessor system, gains are also found on uniprocessor systems that exploit latency in I/O and other system functions which may halt process execution: one thread may execute while another is waiting for I/O or some other system latency. Parallel programming technologies such as MPI and PVM are used in a distributed computing environment, while threads are limited to a single computer system. All threads within a process share the same address space. A thread is spawned by defining a function, and its arguments, which will be processed in the thread. The purpose of using the POSIX thread library in your software is to execute software faster.
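To make the thread-spawning description concrete, here is a minimal sketch of creating and joining POSIX threads. The worker function, its per-thread argument, and the thread count are illustrative only.

/* Spawn a few worker threads, each receiving its own argument, then wait
 * for them to finish.  The worker body is a placeholder. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static void *worker(void *arg) {
    int id = *(int *)arg;                 /* argument passed at creation time */
    printf("thread %d running\n", id);
    return NULL;                          /* return value retrievable via pthread_join */
}

int main(void) {
    pthread_t tids[NUM_THREADS];
    int ids[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; ++i) {
        ids[i] = i;
        if (pthread_create(&tids[i], NULL, worker, &ids[i]) != 0) {
            perror("pthread_create");
            return 1;
        }
    }
    for (int i = 0; i < NUM_THREADS; ++i)
        pthread_join(tids[i], NULL);      /* block until each thread exits */
    return 0;
}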


A condition variable is a variable of type pthread_cond_t and is used with the appropriate functions for waiting and, later, process continuation. The condition-variable mechanism allows threads to suspend execution and relinquish the processor until some condition is true. A condition variable must always be associated with a mutex, to avoid the race condition created by one thread preparing to wait while another thread signals the condition before the first thread actually waits on it; the waiting thread would then wait forever for a wakeup it has already missed. Any mutex can be used; there is no explicit link between the mutex and the condition variable.
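The sketch below illustrates the mutex/condition-variable pairing just described: the waiter holds the mutex, re-checks the predicate in a loop, and pthread_cond_wait atomically releases the mutex while it sleeps. The shared `ready` flag and the two thread bodies are illustrative placeholders.

/* One thread waits for a shared flag, another sets it.  The mutex protects
 * the flag; pthread_cond_wait releases the mutex while waiting and
 * re-acquires it before returning. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;                     /* the condition being waited on */

static void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                        /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    printf("condition became true\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *signaller(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    ready = 1;                            /* change the condition under the mutex */
    pthread_cond_signal(&cond);           /* wake one waiting thread */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, signaller, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}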


2. Define a non-trivial systems programming project and implement it. The approval process requires a 2-page statement of the project, detailing the project scope, deliverables, and tools to be used. Delivery of the project involves a makefile that builds the software and supporting libraries, etc., and will be open to possible oral review with the instructor. If you choose this option, you may tailor the labs to support the ongoing development of the project (where relevant) and with the approval of the instructor or TA. Your project should include multiple topics of the course, for instance, multiple threads, thread concurrency controls, sockets, IPC, etc. An example of a non-trivial programming project might be a multithreaded web server written in C that supports multiple simultaneous clients and basic HTML delivery and implements some subset of the HTTP protocol. Plagiarism, in any form, will result in an F for the course and reporting to the program director.

