Abstract: Growing the community of developers who can successfully exploit multi-core hardware is a challenge: application developers, who are experts in their domains, are not necessarily parallel-programming or performance experts, and application providers have limited resources to develop and maintain multiple target-specific variants of their source code. Parallel programming is simply too error-prone and time-consuming for wide-scale adoption.

Intel Concurrent Collections is a simple yet powerful parallel programming model that separates the expression of all potential parallelism in an application both from the serial computations and from the target-specific details, such as mapping and scheduling, needed to run the application on a particular architecture. It raises the level of abstraction just enough to avoid typical parallelization pitfalls: domain experts only need to define a semantically correct algorithm and do not have to worry about race conditions, deadlocks, architecture details, scheduling, and so on.

An Intel Concurrent Collections program consists of an abstract parallel algorithm definition together with high-level primitive operations and data structures implemented in a serial language; all target-specific issues are handled by the runtime system. This is a very general way of defining a parallel algorithm, and it makes any kind of parallelism easy to express. Intel Concurrent Collections applications are not target-specific and do not have to be rewritten when ported to another platform.
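To give a concrete flavor of this structure, the sketch below shows a minimal Fibonacci-style Intel Concurrent Collections program in C++. It follows the style of the publicly available CnC samples; the exact headers, class names (fib_context, fib_step), and signatures are assumptions and may differ between releases, in particular in the early translator-based distribution described here.

    // Minimal sketch of an Intel Concurrent Collections program (Fibonacci).
    // Names and exact signatures are assumptions based on published CnC samples.
    #include <iostream>
    #include <cnc/cnc.h>

    typedef unsigned long long fib_type;

    struct fib_context;   // forward declaration of the graph context

    // A step: ordinary serial code that reads and writes item collections.
    struct fib_step
    {
        int execute( const int & tag, fib_context & ctxt ) const;
    };

    // The context declares the abstract graph: a step collection, an item
    // collection keyed by int, and a tag collection that prescribes (spawns)
    // step instances.
    struct fib_context : public CnC::context< fib_context >
    {
        CnC::step_collection< fib_step >      m_steps;
        CnC::item_collection< int, fib_type > m_fibs;
        CnC::tag_collection< int >            m_tags;

        fib_context()
            : m_steps( *this ), m_fibs( *this ), m_tags( *this )
        {
            m_tags.prescribes( m_steps, *this );  // each tag put spawns one step instance
        }
    };

    // The step body contains no synchronization code; an unsatisfied get()
    // causes the runtime to re-schedule the step once the item is available.
    int fib_step::execute( const int & tag, fib_context & ctxt ) const
    {
        if( tag < 2 ) {
            ctxt.m_fibs.put( tag, fib_type( tag ) );
        } else {
            fib_type f1, f2;
            ctxt.m_fibs.get( tag - 1, f1 );
            ctxt.m_fibs.get( tag - 2, f2 );
            ctxt.m_fibs.put( tag, f1 + f2 );
        }
        return CnC::CNC_Success;
    }

    // The environment puts control tags, waits for the graph to quiesce,
    // and reads the result item.
    int main()
    {
        const int n = 42;
        fib_context ctxt;
        for( int i = 0; i <= n; ++i ) ctxt.m_tags.put( i );
        ctxt.wait();
        fib_type result;
        ctxt.m_fibs.get( n, result );
        std::cout << "fib(" << n << ") = " << result << std::endl;
        return 0;
    }

The step body is plain serial C++; the tag, item, and step collections declared in the context carry all the information the runtime needs to schedule step instances in parallel on the target platform.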

Intel Concurrent Collections for C++ is implemented as a C++ library built on the Intel TBB library and is published on the whatif.intel.com site (http://softwarecommunity.intel.com/articles/eng/3862.htm). The implementation includes a translator that turns a textual Intel Concurrent Collections abstract parallel algorithm definition into C++ class declarations.

The Intel Concurrent Collections programming model makes the development of parallel applications accessible to domain experts who are not experts in parallel programming.