Barcelona Supercomputing Center and Microsoft create joint research centre for parallel computing
On 18 January 2008, Microsoft and the Barcelona Supercomputing Center (BSC) announced the creation of the BSC-Microsoft Research Centre, which focuses on the way microprocessors and software for the mobile and desktop market segments will be designed and will interact over the next 10 years and beyond.
The advent of many- and multi-core processor computing architectures will make it possible to deliver enormous computational power on a single chip, with profound implications for the way software is developed. Researchers from BSC and Microsoft are addressing the challenges and opportunities that massively parallel processing presents, and are specifically focusing on optimising the design and interaction of hardware and software architectures to take advantage of the new computing power. Futures recently asked Mateo Valero, Director of the BSC, about the work of the BSC-Microsoft Research Centre.
Mateo Valero, Director of the BSC
QUESTION: It is becoming increasingly difficult to build faster processors, and at the same time further miniaturisation of processors brings heat-dissipation problems that we may not be able to handle. Is innovation in computing hardware, and with it computing itself, slowing down?
MATEO VALERO: We narrowly averted crashing head-on into a wall: the power-density wall. It was a narrow miss; the thousands of engineers who worked on cancelled processors can testify to that. We can talk about other walls as well: the memory wall, the Instruction Level Parallelism (ILP) wall, and so on. The way around those walls is multi-core architectures with many processors running in parallel. So now, rather than having one monolithic fast processor, you have 128 processors. Each of those processors is much smaller in area, dissipates much less power and is somewhat slower than the large monolithic design. However, it is the aggregate performance that matters. You break a large problem down into 128 smaller problems that can then be executed much faster in parallel, and then you have a winner. We will also see more heterogeneous architectures; high-performance processors will adopt a system-on-chip approach with many specialised cores, and those architectures will likely reorganise themselves based on application needs. Multi-core architectures open up new horizons. You can do neat tricks such as over-clocking some cores even beyond the single-core Thermal Design Power (TDP) limit; as long as most of the other cores are idle, the chip as a whole stays substantially below its TDP. In this era it becomes possible to run today's supercomputing applications on those hundreds of on-chip cores, but we need further research into how the interconnection mechanisms influence this mapping. This is one of the topics we are working on at the Barcelona Supercomputing Center (BSC).
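To make the divide-and-conquer idea concrete, here is a minimal C++ sketch of breaking one large reduction into 128 smaller pieces that run in parallel; the core count, problem size and workload are our own illustrative assumptions, not figures from the interview:

```cpp
// Illustrative sketch: split one big sum into 128 chunks run in parallel.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t kCores = 128;      // assumed core count from the interview
    const std::size_t kN = 1u << 24;     // hypothetical problem size
    std::vector<double> data(kN, 1.0);

    std::vector<double> partial(kCores, 0.0);
    std::vector<std::thread> workers;
    const std::size_t chunk = kN / kCores;

    for (std::size_t c = 0; c < kCores; ++c) {
        workers.emplace_back([&, c] {
            const std::size_t begin = c * chunk;
            const std::size_t end = (c + 1 == kCores) ? kN : begin + chunk;
            // Each worker solves its own smaller problem independently.
            partial[c] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& t : workers) t.join();

    // Combining the 128 partial results is itself a serial step.
    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << total << "\n";
}
```

Note that the final combination of the partial results remains serial; that unavoidable serial fraction is exactly what Amdahl's Law, discussed below, penalises.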
QUESTION: If running a large number of processors in parallel is the way out, isn't that limited by how many we can actually get to run in parallel? And doesn't it become increasingly complicated for engineers to develop software that matches the requirements of systems with so many processors?
MATEO VALERO: The aim is to apply the latest concepts from computer science research, such as transactional memory. It also means that theoretical concepts from computer science and programming will guide the design of processors. For a single application, we are limited by Amdahl's Law, i.e. by the inherently serial section of the application. More research is necessary at both the algorithm and application levels, since we keep finding parallelisations of subproblems that we previously thought were inherently serial. There is increased pressure on software engineers to write code that efficiently utilises those new many-cores. We can say that if we leave things as they are, we may be facing a new wall: a software efficiency or performance wall. The key to scaling this wall is to overhaul the way we design processors: processor design in the many-core era needs to be driven by software needs and requirements. Theoretical concepts from computer science and programming will be the stars in this new era; transactional memory is one example. Hardware support for transactional memory enables software developers to write many-core code more efficiently. Sophisticated run-time environments will provide an abstraction layer between the computer hardware and infrastructure on the one hand and the application layer on the other. It is important to shield the complexities of the hardware from programmers, and carefully thought-out programming models could lead the way to making the programmer's life easier. At BSC, we are working on programming models such as Cell Superscalar (CellSs), which makes it easier to parallelise applications for one of the current multi-cores, the Cell processor.
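For reference, Amdahl's Law as Valero invokes it can be written as follows; the numbers in the comments are an illustrative example of ours, not figures from the interview:

```latex
% Speedup S on p processors when a fraction s of the work is
% inherently serial (Amdahl's Law):
S(p) = \frac{1}{s + \frac{1 - s}{p}}, \qquad
\lim_{p \to \infty} S(p) = \frac{1}{s}
% Illustrative example: with s = 0.05 and p = 128,
% S(128) = 1 / (0.05 + 0.95/128) \approx 17.4,
% and no number of processors can push the speedup past 1/0.05 = 20.
```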
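The essence of the transactional memory Valero alludes to is optimistic concurrency: speculate on a result, then commit it only if no other thread interfered, retrying otherwise. Below is a minimal C++ sketch of that commit-or-retry pattern, reduced to a single shared word via compare-and-swap; real hardware or software TM generalises this to whole read and write sets, and all names here are our own, not an API from the interview:

```cpp
// Sketch of the optimistic commit-or-retry idea behind transactional
// memory, reduced to one shared word updated with compare-and-swap.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<long> balance{0};  // shared state touched by all threads

void deposit(long amount) {
    long seen = balance.load();
    // Speculate: compute the new value from `seen`, then commit only if
    // no other thread changed `balance` in the meantime; otherwise
    // `seen` is refreshed with the current value and we retry.
    while (!balance.compare_exchange_weak(seen, seen + amount)) {
        // retry with the updated `seen`
    }
}

int main() {
    std::vector<std::thread> ts;
    for (int i = 0; i < 8; ++i)
        ts.emplace_back([] { for (int j = 0; j < 100000; ++j) deposit(1); });
    for (auto& t : ts) t.join();
    std::cout << balance.load() << "\n";  // always 800000, no locks taken
}
```

GCC's experimental transactional memory extension (-fgnu-tm), for example, applies the same commit-or-retry semantics to whole code blocks rather than a single word.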
QUESTION: It seems that, with software concepts and hardware architecture now being developed together, and with software concepts guiding hardware design, software and computer science innovation is on the verge of an exciting era. Do you agree?
MATEO VALERO: Absolutely; the old Chinese curse 'May you live in interesting times' is our motto. I believe that we are witnessing another revolution in computing, one that will fuse software and hardware researchers firmly together in holy multidisciplinary matrimony forever! We are certainly proud to be one of the contributors to this revolution.