It is a fact universally acknowledged: the pharmaceutical industry needs a new drug development paradigm. Systems biology promises to place drug targets in their overall physiological context, offering a possible route out of the destructive cycle of spiralling R&D costs and low productivity now gripping the pharmaceutical industry. But it will take new tools and programming languages to underpin the vision of computerising biology.
A lifeline for systems biology
Electron micrograph of a swarm of cytotoxic T cells (white) attacking a tumour cell (orange)
‘Divide and Conquer' was the rallying cry of molecular biologists over the past five decades, as organisms were broken down into their component parts and each studied minutely. This yielded a wealth of knowledge, no doubt, but in the past ten years the ultimate practical value of such baroque detail has been called into question. Or rather, it has become evident that understanding discrete biological components is a staging post, not the ultimate destination.
Nowhere is this better illustrated than in the pharmaceutical industry, where the marriage of combinatorial chemistry, high-throughput screening and the torrent of new targets unleashed by sequencing the genome and proteome has fuelled a huge increase in productivity in the discovery phase.
However, as Adriano Henney of AstraZeneca plc told delegates in a keynote address at the Future Challenges for Systems Biology conference in Tokyo in February, “This has not been matched by success in development, with a number of well-publicised failures recently of drugs in later stages of the development pipeline.”
The point was reinforced by Kenneth Kaitin, Director of the Tufts Center for the Study of Drug Development, who noted the US Food and Drug Administration approved just 16 novel drugs in 2007 – the same number as in 1983. Speaking at the American Association for the Advancement of Science meeting in Boston on 16 February, he said, “The industry is spending a lot of money to bring fewer and fewer drugs to market.” Thirty years ago, when the Tufts Center first examined the cost of getting a new drug to market, the price tag was $54 million. Last year the (inflation-adjusted) price was $1.3 billion.
Why has the increase in productivity in the discovery phase failed to translate to more approvals? At the margins are issues such as increased scrutiny and risk-aversion on the part of regulators. But the essence is a lack of understanding of novel targets beyond the isolated snapshot captured through the reductionist prism of molecular biology.
According to Henney, current evidence indicates the main reasons for failure of compounds directed at such novel targets are preclinical toxicity, lack of clinical efficacy and inadequate clinical safety. “In combination these factors account for up to 60 percent of development project failures,” he says.
What is missing from genomics and proteomics targets is an in-depth understanding of their place within the overall biological system, and the part they play in triggering, maintaining and progressing disease.
The march of molecular biology has reached the point where we can look inside cells and see how molecular components meet and greet each other. This molecular communication underpins all biological processes, and is analogous to communication within computer networks. The central challenge of systems biology is to take the individual components of molecular biology, capture the dynamics of their interactions and form them into end-to-end physiological pathways, tissues, organs, and ultimately, complete organisms.
The vision has been further accelerated by technological advances in the computer industry which make it possible to produce systems that are as large, complex, scalable and distributed as the practical application of systems biology will demand.
As yet, researchers are nibbling around the edges, but there are projects that give a flavour of what will be possible. For example, one of the major reasons why drugs fail in clinical trials is liver toxicity. Hepatosys, a German project started in 2004, aims to understand all the biochemical processes taking place in human liver cells and use this as the basis of a computer simulation of the liver that can be used for virtual screening.
3D simulation of a protein cascade, which acts as an ultrasensitive switch (using Network 3D)
Liver toxicity not only leads to drug candidates failing in development, it is also the most common reason for drugs to be withdrawn from the market. ALERT, a pan-European project funded by the European Union's Seventh Framework Programme, began work in February on a computerised system to detect adverse reactions. A key objective is to distinguish false alarms and to do this, ALERT will develop in silico models and simulations of the behaviour of drug and biological systems.
Meanwhile, at least a dozen efforts are under way worldwide to build virtual models of cells to predict both the beneficial and harmful side effects of a new drug.
But if the failure rate of drugs in development exposes one thing, it is that current-generation simplified models are a liability. Drug candidates are failing both because the complexity of the pathway in which the target is involved is unknown, and because there is barely a drug in the formulary that acts on only one target. On the one hand, this says that models and simulations need to be constantly reinforced with data from ‘wet' experiments to increase their predictive power. On the other hand, it exposes the shortcomings of the tools available for developing models.
One approach that is starting to make a real difference is based on a simple yet revolutionary observation: biological systems have the same characteristics as large, distributed software systems, with processes running in parallel and data passed from one process to another. If so, the techniques long established for engineering software, namely programming languages and development tools, could be used to model and re-engineer biological systems.
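The analogy can be made concrete with a toy sketch (invented molecule names, not taken from any of the projects described here): two molecules modelled as concurrent processes that interact by exchanging a message on a shared channel, which is essentially how a process calculus views a binding event.

```python
import threading
import queue

# Toy illustration: molecules as concurrent processes that interact by
# message passing on a shared channel, as processes do in a pi-calculus model.
bind = queue.Queue()   # channel on which ligand and receptor interact
log = []

def ligand():
    bind.put("signal")             # offer an interaction on the 'bind' channel

def receptor():
    msg = bind.get()               # synchronise with a ligand
    log.append(f"receptor activated by {msg}")

threads = [threading.Thread(target=ligand), threading.Thread(target=receptor)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log[0])  # receptor activated by signal
```

The point of the exercise is that the interaction, not the molecule, is the basic unit of the description, exactly as in concurrent software.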
Based on this insight, researchers at Microsoft are turning their attention to developing a new generation of computing tools designed specifically for systems biology. Heading this push are the University of Trento / Microsoft Research Centre for Computational and Systems Biology in Trento, Italy, directed by Corrado Priami, and colleagues at Microsoft Research in Cambridge, UK.
They are currently dusting down the pi-calculus, a formal language for describing concurrent, communicating processes, and adapting and enhancing it to meet one of the key requirements of systems biology: the ability to capture concurrency.
The variant Stochastic Pi-calculus developed by Priami can assign rates to interactions, and has been used, for example, to simulate the well-known RTK-MAPK pathway involved in gene expression and cell cycle control. A further variant, the Ambient Calculus, developed by Luca Cardelli and Andrew Gordon at Microsoft Research in Cambridge, can be used to represent molecular localisation and compartmentalisation.
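Under the hood, simulators for rate-bearing calculi typically execute models with Gillespie-style stochastic kinetics. The minimal sketch below (species, molecule counts and the rate constant are all illustrative assumptions, not drawn from Priami's models) shows the core loop for a single binding reaction A + B → C with an assigned rate:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Minimal Gillespie-style stochastic simulation of one reaction with an
# assigned rate - the flavour of kinetics a stochastic pi-calculus
# attaches to its channels.
def simulate(a=100, b=100, c=0, rate=0.001, t_end=50.0):
    t = 0.0
    while t < t_end and a > 0 and b > 0:
        propensity = rate * a * b               # A + B -> C at the given rate
        t += random.expovariate(propensity)     # exponential waiting time
        if t >= t_end:
            break
        a, b, c = a - 1, b - 1, c + 1           # fire the reaction once
    return a, b, c

a, b, c = simulate()
print(a, b, c)
```

Because event times are drawn from an exponential distribution weighted by the current molecule counts, repeated runs capture the noise that deterministic rate equations average away.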
Meanwhile, Jasmin Fisher, also of Microsoft Research Cambridge, is using a new way of modelling biological systems in which all processes and biological states, and the transitions between them, are represented as an executable set of instructions. This so-called ‘executable biology' technique also has the advantage of ensuring that models and predictions are testable and verifiable.
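A minimal sketch of the executable style, with invented states and triggers rather than Fisher's actual models: biological states and the transitions between them are written as a directly runnable state machine, and its traces can then be checked against observed cell behaviour.

```python
# Invented cell-fate states and events, for illustration only: each entry
# maps (current state, event) to the next state.
TRANSITIONS = {
    ("quiescent", "growth_signal"): "proliferating",
    ("proliferating", "dna_damage"): "arrested",
    ("arrested", "repair_done"): "proliferating",
    ("arrested", "repair_failed"): "apoptotic",
}

def run(state, events):
    """Execute the model: apply each event in turn, recording the trace."""
    trace = [state]
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore unhandled events
        trace.append(state)
    return trace

trace = run("quiescent", ["growth_signal", "dna_damage", "repair_failed"])
print(trace[-1])  # apoptotic
```

Because the model is a program, a claim such as "damaged cells that fail to repair always die" becomes a checkable property of its traces rather than a prose assertion.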
Building on his colleagues' work on the pi-calculus, Andrew Phillips at Microsoft Research Cambridge is developing the Stochastic Pi Machine (SPiM), a simulator that can be used to execute models of biological systems.
Phillips is applying SPiM with collaborators at Southampton University to model aspects of the huge information-processing machine that is the human immune system. The researchers want to model one of the key pathways, MHC Class I antigen presentation, in which the immune system signals that a cell is infected by a foreign body, such as a virus, and marks it out for destruction.
Eventually, says Phillips, it should be possible to build a library of independent modules that can be plugged together to create any biological system. Such an approach has the advantage of hiding complexity from the user and accommodating the fact that some elements of a system may not yet be understood, allowing new information to be added without compromising what went before.
“You could take these modules and change them without affecting anything else in the system,” says Phillips. “You can test hypotheses by taking a module and plugging it in elsewhere.”
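A hedged sketch of that modular idea, with hypothetical module names rather than anything from SPiM: each pathway fragment sits behind a common interface (here, a function from state to state), so one module can be swapped or re-plugged without touching the rest.

```python
# Hypothetical pathway modules sharing one interface: each takes the current
# state of the system (a dict of quantities) and returns the updated state.
def transcription(state):
    state["mRNA"] = state.get("mRNA", 0) + state.get("gene_active", 0)
    return state

def translation(state):
    state["protein"] = state.get("protein", 0) + state.get("mRNA", 0)
    return state

def compose(*modules):
    """Plug independent modules together into one runnable pathway."""
    def pathway(state):
        for module in modules:
            state = module(state)
        return state
    return pathway

model = compose(transcription, translation)
print(model({"gene_active": 1}))  # {'gene_active': 1, 'mRNA': 1, 'protein': 1}
```

Because modules only meet at the shared state interface, replacing `translation` with a more detailed version, or plugging it into a different pathway to test a hypothesis, leaves every other module untouched.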
The irony, of course, is that molecular biology's divide and conquer approach also turns out to be the fastest and easiest route to reconstructing biological processes in silico.