Earthquake researchers have a problem. So do scientists trying to investigate the spread of deadly malaria. And conservationists trying to get a handle on the state of illegal logging may have it worst of all.
What connects all of these is that when it comes to cracking some of their fields’ biggest problems, traditional scientific methods are not fully up to the task.
The earthquake researchers would like monitoring systems that fully span high-risk areas, but deploying enough research-grade sensors to cover hundreds of miles of fault lines would cost millions of dollars. Swiss malaria researchers need to run enormous numbers of calculations to simulate the spread of malaria worldwide – valuable information for governments deciding where best to spend limited resources on life-saving interventions – but the computing power required is prohibitively expensive. And in the deep, lush forests of the Congo Basin, finding enough well-trained people to monitor illegal tree felling and poaching is almost impossible.
But science is changing. In the age of the internet, it is waking up to the idea of people power: the combined force of thousands of ordinary connected volunteers who can help collect or crunch overwhelming masses of data.
“There’s so many people that can take part in this way, I think crowd-sourcing could almost be more important than the development of the Web,” says Ben Segal, who has worked on many volunteer computing projects at the European Particle Physics Laboratory, Cern – and who mentored, among others, a young Tim Berners-Lee, who would go on to invent the World Wide Web.
“Citizen science” is not a new concept. So-called “volunteer computing” projects have expanded rapidly since the launch of Seti@Home in 1999, a program that still uses the power of millions of ordinary computers in screen-saver mode to help search for signs of intelligent life in the universe. By tapping into processing power that would otherwise go to waste – the “cycles” a machine burns while sitting idle or recharging – desktops and laptops scattered worldwide can band together to mimic the number-crunching power of a supercomputer.
By the end of the last decade, several projects were using volunteer computing power to solve complex problems, ranging from cataloguing galaxies in the distant corners of the universe with Galaxy Zoo to predicting the complex three-dimensional structures of proteins with FoldIt. There is even a site (scistarter.com) devoted to the growing popularity of citizen science, where people can discover, take part in and fund research projects.
But those at the forefront of the field say that citizen science is now beginning to enter a new era. What has changed is a growing sense that participants can actively take part in projects, rather than passively allowing their idle computers to do the grunt work. “Their feeling is that science is too important to be left to scientists alone,” says Francois Grey from the Citizen Cyberscience Centre, a collaboration set up in 2009 between Cern, the University of Geneva and the United Nations, with seed money from the Shuttleworth Foundation.
Tools for the trade
Grey’s Citizen Cyberscience Centre is one of the main operations pushing citizen science into unexplored territory. One reason this is becoming increasingly possible, Grey says, is that the technology barrier is dropping, so ever more sophisticated hardware can be placed in citizens’ hands.
One project the CCC supports – the “Quake Catcher Network”, or QCN as it is known – epitomizes the trend towards ever-smaller, more nimble devices based upon the latest chips. A customized external motion-detecting device with a USB plug turns people’s ordinary desktops into automated earthquake detectors. Connect those computers via the internet to a centralized server, and you have a wide-ranging system that maps an earthquake’s aftermath.
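The detection logic itself can be surprisingly simple. As a rough illustration – not the actual QCN software – a client might flag any sample whose acceleration deviates far enough from the resting baseline, then report those candidates to the central server, which can correlate reports from many machines to separate real quakes from bumped desks. The function name, baseline and threshold values below are all invented for illustration:

```python
def detect_events(samples, baseline=1.0, threshold=0.5):
    """Return indices of samples whose acceleration (in g) deviates
    from the resting baseline by more than `threshold` -- candidates
    a client would report to the central server."""
    return [i for i, a in enumerate(samples)
            if abs(a - baseline) > threshold]

# A quiet trace with a burst of shaking at indices 3 and 4.
trace = [1.0, 1.02, 0.98, 1.8, 0.1, 1.01]
print(detect_events(trace))  # [3, 4]
```

The point of the design is that each desktop only does this cheap local filtering; the expensive work of deciding whether many simultaneous reports add up to a quake happens once, server-side.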
The network has been tested in the San Francisco Bay Area, and sensors were sent to New Zealand following the earthquake in September 2010 to learn more about the occurrence of “aftershocks” – which are almost as dangerous as the main event itself. In November last year, researchers at Taiwan’s Academia Sinica set up a server to monitor the quake-prone island that lies between the Eurasian and Philippine Sea Plates.
The sensors were developed by Elizabeth Cochran of the US Geological Survey and Jesse Lawrence of Stanford University, and currently cost between $60 and $200 apiece, a fraction of the cost of professional seismometers, which can cost anything up to $100,000 each. The QCN devices have only a fraction of the sensitivity of research-grade seismometers, but what they lack in sensitivity they more than make up for in sheer volume, says Lawrence. “With many more cheap sensors, instead of guessing where strong motions were felt by interpolating between research sensors, we should be able to know where strong motions were felt immediately, because we have (QCN) sensors there.”
And soon there could be an army of mobile “quake-catchers”, according to QCN’s Carl Christensen. Smartphones are ideal for the task, as they already have built-in motion-detectors, gyroscopes, accelerometers, and GPS signaling. By summer 2012 QCN expects to release an app that turns your Android smartphone into a pocket-sized earthquake sensor. Soon after, they hope to send 1,000 sensor-equipped phones to places where a fault-line has just slipped.
Some of the hardware being adopted in other citizen science projects also hails from unexpected origins. A major advance in scientific computing came from the development of superfast 3D Graphics Processing Units (GPUs) to run video games on Sony’s PlayStation 3 console. A GPU can crunch numbers around 10 times faster than an ordinary chip. Consequently, Dave Anderson, founder of the open-source software platform BOINC, foresees volunteer computing at an “exascale” level – about 1,000,000,000,000,000,000 calculations per second – 100 times more powerful than today’s top supercomputers.
Gaming for gain
The scientific benefits of volunteer computing can be enormous, and consequently there are a host of efforts looking to capitalise on people’s unused processing power. For example, malariacontrol.net simulates the spread of the disease on computer – helping governments decide how to invest most effectively in, for instance, bednets versus vaccines.
In 2005, the Swiss Tropical and Public Health Institute’s 40-strong fleet of office computers struggled to run the enormous number of epidemiological simulations needed to get “real-world” results. After turning to volunteer computing, the institute now has the computing power of up to 15,000 desktops working simultaneously. Nicolas Maire, a researcher at the organization, estimates that this is the equivalent of a single desktop computer operating for between 800 and 1,000 years. “Realistically, it would have been unfeasible to do in any other way,” he admits.
But it is not all serious work. Some of the most successful volunteer computing approaches have their roots in the world of gaming. Designers and software engineers are taking algorithms and game design principles and using them to solve longstanding scientific puzzles that require complex computer calculations. The programs they are creating encourage volunteers to donate spare processing power by turning it into an online game where people compete and collaborate with each other.
In FoldIt, for instance, players bend, pull and fold digital versions of protein molecules on their computer screens. Building the protein components needed for life involves a complex set of machinery in our cells translating information encoded in our genes into a sequence of amino acids, which is then wrapped and folded into a three-dimensional form designed to carry out its required function. The amino-acid sequence dictates the shape the protein will eventually adopt, but even small proteins can potentially fold in a huge number of different ways, so it is always a challenge for computers to figure out which of the many possible structures is the best one.
With around 240,000 registered players, FoldIt is proving invaluable for researchers trying to solve the complex structures and folding patterns of proteins. Players can bend and pull proteins into their optimum shape, as long as they obey the rules of physics. The closer your attempts at protein origami adhere to those rules, the more points you get.
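The scoring idea can be sketched in a few lines. This toy stand-in – not FoldIt’s real energy function – heavily penalizes steric clashes (atoms packed impossibly close) and mildly penalizes spread-out shapes, so a sensible compact fold outscores a stretched-out chain; the coordinates, distances and weights are all invented for illustration:

```python
import math

def toy_score(coords, clash_dist=1.0):
    """Toy stand-in for a physics-based fold score: heavily penalize
    'atoms' closer than clash_dist, mildly penalize spread-out shapes.
    Higher is better."""
    clashes = 0
    spread = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            if d < clash_dist:
                clashes += 1   # overlapping atoms: big penalty
            spread += d        # distant atoms: small penalty
    return -100 * clashes - spread

# A compact fold outscores a stretched-out chain of the same 'atoms'.
stretched = [(0, 0), (2, 0), (4, 0), (6, 0)]
folded = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(toy_score(folded) > toy_score(stretched))  # True
```

The game works because humans are good at exactly this trade-off: spotting how to tuck a chain into a compact shape without pushing any two pieces into a clash.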
In recent months the game-playing volunteers have shown they can both actively predict protein structures and design new proteins. Last September, scientists reported the structure of a key enzyme that allows HIV to replicate, making it an obvious target for drugs. The precise protein structure had stumped them for almost 15 years, but the FoldIt community produced it within days. In January this year, gamers produced the first crowdsourced protein redesign – revving up the performance of an enzyme for one of the most important reactions organic chemists use to build compounds ranging from drugs to pesticides.
Another online program, Phylo, is advancing scientists’ knowledge of genetics by making a game out of DNA matching. If regions of genetic sequence are roughly similar between species, it strongly suggests that they could have an important function. But finding the best matches has been beyond the scope of computer algorithms. Earlier this month, researchers published a study in which gamers outsmarted the best computers, making the best possible DNA sequence match between up to eight species at a time.
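The kind of scoring behind such matching can be illustrated with a toy pairwise version (the real game aligns up to eight species at once, and these point values are invented): reward matching bases, penalize mismatches, and penalize gaps more heavily, so sliding a gap into the right position raises the score.

```python
def alignment_score(seq_a, seq_b, match=1, mismatch=-1, gap=-2):
    """Score two equal-length gapped DNA sequences, toy-Phylo-style.
    Point values are invented for illustration."""
    score = 0
    for a, b in zip(seq_a, seq_b):
        if a == '-' or b == '-':
            score += gap       # gap: heaviest penalty
        elif a == b:
            score += match     # matching bases: reward
        else:
            score += mismatch  # mismatch: mild penalty
    return score

# Placing the gap well turns a poor alignment into a good one.
print(alignment_score("GATACA-", "GATTACA"))  # -2 (gap in the wrong place)
print(alignment_score("GAT-ACA", "GATTACA"))  #  4 (gap in the right place)
```

A computer must search a combinatorial number of gap placements; a player can often see the right shift at a glance, which is exactly the human edge the study measured.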
Despite these notable successes, CCC’s Grey is quick to point out that volunteer computing does not provide a universal solution. “There are certain problems you can crack with a supercomputer that would be hopeless with volunteer computing,” he says. Supercomputers are best suited for problems where thousands of processors must communicate with each other and swap data frequently during a calculation, according to Grey. Volunteer computing works best on easily shared problems that are divisible into digestible pieces that can be worked on in any order at any time.
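That divisibility is easy to picture in code. In this hypothetical sketch (all names invented), a parameter sweep like malariacontrol.net’s simulations is cut into self-contained work units; because each unit depends only on its own parameters, volunteers can run them in any order and the server simply reassembles the results afterwards:

```python
import random

def make_work_units(param_grid):
    """Cut an independent parameter sweep into self-contained units
    that volunteers can process in any order."""
    return [{"id": i, "params": p} for i, p in enumerate(param_grid)]

def run_unit(unit):
    # Stand-in for a real simulation: any pure function of the params.
    return {"id": unit["id"], "result": sum(unit["params"])}

units = make_work_units([(1, 2), (3, 4), (5, 6)])
random.shuffle(units)  # volunteers finish in arbitrary order
results = sorted((run_unit(u) for u in units), key=lambda r: r["id"])
print([r["result"] for r in results])  # [3, 7, 11]
```

A tightly coupled supercomputer job, by contrast, would need the units to exchange data mid-calculation – which is exactly what a scattered crowd of home PCs cannot do.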
That said, the successes highlight how much volunteers want more of an active role in citizen science. Volunteers do not just want to contribute data passively to a study, they want to learn and develop as a result of taking part. Some projects have volunteer-run support, forums and message boards. Scientists, volunteers, and the open-source community gather at hackfests to find volunteer programming and computing solutions to science problems.
Cern’s Segal says he has been struck by volunteers’ appetite for learning on projects like LHC@Home, and that volunteer computing seems to attract a lot of retired scientists and teachers, as well as people with a degree in science who wound up doing some other job. “We’ve had a number of cases of people who put in hours of unpaid labour to help newbies,” he says. “The advantage is that it can be self-policing, much like the original Wikipedia.”
But the deepest forests of the Congo Basin may provide a glimpse of where citizen science could be heading. An initiative aims to help pygmy tribes fight logging and poaching in this area by allowing them to document destruction and deforestation with games and interactive maps.
This initiative is part of the Extreme Citizen Science programme based at University College London, which officially launched in February this year. Headed by computer scientist and geographer Muki Haklay and anthropologist Jerome Lewis, the programme aims to allow any community, regardless of its literacy, to start, run and analyse a scientific study – whether to advance its own interests or to effect policy change. “We want to move citizen science from the educated to everyone,” says Haklay.
Working in a part of east London called Deptford, Haklay and his colleagues set up a community project in which individuals monitored the noise pollution coming from a local scrapyard. The bank of local data they built showed the operation was violating noise limits, and on that evidence the UK Environment Agency revoked the scrapyard’s licence.
Lewis meanwhile is working with indigenous people in the Republic of Congo and Cameroon to develop data collection tools that can be used by non-literate people. In the 1990s, vast regions of forest in the Congo Basin were divided up and sold to multinational companies to mine resources. But by the mid-2000s, many of these companies wanted FSC (Forest Stewardship Council) certification to signify products that are from responsibly harvested and verified sources, and they turned to people like Lewis for help.
So Lewis devised ways of monitoring what the indigenous people wanted to preserve – for instance, trees from which they harvest a particularly delicious and tradable species of caterpillar. People are equipped with touchscreen devices with icons for various options, such as “valuable tree”, that they can select and tag with GPS coordinates. “What is good is that these maps will do the talking for communities,” says Lewis.
However, there are still several fundamental issues to resolve with these types of projects. One is how you train people adequately for the task. Another is how scientists can ensure that the methods used and quality of data are robust enough when people have a clear stake in the outcome.
Making data-collection protocols, instruments and analysis as robust as possible can overcome malicious or accidental bias, says Haklay. “The importance of smartphones as scientific instruments cannot be underestimated in this context: GPS provides accurate time and good location; phone pictures provide location and timestamps (which are unlikely to be tampered with by non-literate and technically challenged users), which provide you with evidence; and when they communicate with air pollution sensors or a microphone about noise level, for instance, you have a time-stamped observation from an instrument.”
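The kind of self-evidencing record Haklay describes can be sketched as a simple data structure – the field names here are invented, not from any real project – bundling the instrument reading with a GPS fix and a timestamp so that each observation carries its own provenance:

```python
from dataclasses import dataclass, asdict
import time

@dataclass(frozen=True)
class Observation:
    """One reading bundled with the evidence that backs it up."""
    reading: float    # e.g. noise level in decibels
    lat: float        # GPS latitude
    lon: float        # GPS longitude
    timestamp: float  # seconds since epoch, ideally from the GPS clock

def record(reading, lat, lon, clock=time.time):
    """Capture a reading with its location and time of capture."""
    return Observation(reading, lat, lon, clock())

# A hypothetical noise reading, with a fixed clock for reproducibility.
obs = record(78.5, 51.48, -0.02, clock=lambda: 1_300_000_000.0)
print(asdict(obs))
```

Freezing the record (`frozen=True`) mirrors the evidential point: once captured, an observation should not be quietly editable.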
Another concern is that there are simply not enough potential volunteers available. CCC’s Grey points out that with only 40% of the population of China hooked up to the internet so far, there is potentially another half-billion people about to join. Already a number of citizen science projects are trying to harness the large pool of potential volunteers in China, and the first project devised by Chinese scientists launched in 2010. Researchers at Beijing’s Tsinghua University, with the support of IBM’s World Community Grid, set up Computing for Clean Water, which uses computing power from more than 50,000 volunteers to design better low-cost, low-pressure water filters in simulation – which will hopefully make water purification cheaper and more accessible.
And this interest will only increase, says Grey. “I did the maths, and every second another Chinese person joins the Internet for the first time – and all of them could potentially donate computing time on their laptop or tablet or phone,” he says. “So, when someone says there are just not enough possible volunteer computers out there, I say that at one new Chinese internet user per second, don’t worry.”