In the second of our five-part series on the future of mobile, Roland Pease explores the innovations and tricks that will keep us connected.
Smartphones: They are called that for a reason.
Over the last few years our mobile handsets have been transformed into portable computers packed with ever-increasing digital intelligence. Today, devices like the iPhone pack the same punch as one of the iconic Cray supercomputers that dazzled computer scientists in the 1980s.
But, according to Rich Howard, former head of Wireless Research at Bell Labs, it is wrong to assume that this raw computing power is used for smartphones' most obvious functions.
“Most of what it does is make the communication work, not running some app you've pulled up,” he says.
In fact, your smartphone is in constant dialogue with the mobile phone network, working out which radio mast it should associate with and how best to transmit any information. When it is time for a call or a spot of browsing, the smartphone then has to encode and compress data, as well as protect it from degradation. At the same time, it constantly has to navigate the complex network of base stations and masts, ensuring there is a smooth transition as you move from one to another.
This is where a mobile’s computing power comes into play, and it has allowed the amount of data handled by the networks to grow by a factor of a million since the first phones were introduced in the 1980s.
The trouble is, there is more and more demand being put on today’s networks, with some predicting that the amount of data that they will have to handle will increase by a further factor of a thousand by 2020. Making phones even smarter will help. But the industry is now also exploring innovative new ways to shuttle this data around and keep up with our insatiable appetite for data.
The most basic approach is just to build more base stations for our mobiles to talk to.
The whole concept of cellular networks is based on dividing up cities and countries into “cells”, each served by a single base station that connects to a number of customers. As long as the transmissions from different masts don't overlap and interfere, everyone is happy. So halving the size of each cell and turning the power down is a quick way to double the number of wireless connections. But it comes at a cost. Each 3G tower costs around $50,000. Add all of the cabling infrastructure needed to connect them to the network, and the total soon mounts up.
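The arithmetic behind cell splitting can be captured in a few lines. In this toy model, only the $50,000-per-tower figure comes from the article; the coverage area, cell sizes and users-per-cell numbers are invented for illustration. It shows how halving the cell area doubles the connection count, and the bill along with it:

```python
import math

# Toy model of cell splitting: smaller cells mean more simultaneous
# connections, but proportionally more towers to pay for. The $50,000
# tower price is from the article; all other figures are invented.

def cells_to_cover(area_km2, cell_area_km2):
    """Number of cells needed to blanket an area (ignoring overlap)."""
    return math.ceil(area_km2 / cell_area_km2)

def network_capacity(num_cells, users_per_cell):
    """Total simultaneous connections the network can carry."""
    return num_cells * users_per_cell

CITY_AREA = 100.0    # km^2, illustrative
TOWER_COST = 50_000  # USD per 3G tower, per the article

big_cells = cells_to_cover(CITY_AREA, cell_area_km2=4.0)
small_cells = cells_to_cover(CITY_AREA, cell_area_km2=2.0)  # halved area

# Halving the cell area doubles both capacity and infrastructure cost.
print(network_capacity(big_cells, 100), big_cells * TOWER_COST)
print(network_capacity(small_cells, 100), small_cells * TOWER_COST)
```

The point of the sketch is the linear trade-off: every doubling of capacity by cell splitting doubles the tower bill, before the backhaul cabling is even counted.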
Nevertheless, mobile operators have increasingly been decreasing the size of cells and increasing the density of the network, adding nanocells, picocells and femtocells to public areas like malls and airports, buildings and even to single homes.
Perhaps the logical conclusion of this is the so-called attocell, proposed by Professor Harald Haas of Edinburgh University. His idea would allow individual rooms in a house to be further subdivided into individual cells. To do this, rather than using a radio antenna, he proposes to use the humble light bulb as the transmitter.
“Every light bulb in your house could form an individual access point,” Professor Haas enthuses.
“You could serve a laptop in one corner of a room with one bulb and have another bulb serving your tablet in another corner.”
What makes Professor Haas' proposal possible is the development of ultra-high-performance light emitting diodes (LEDs), which can flicker at tens of millions of cycles per second. Data can be encoded into those flickers – too fast for anyone to notice or be disturbed. And the data can be sent to the bulbs through a building's standard electrical wiring.
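The simplest way to encode data into flickers is on-off keying: the LED is on for a 1 bit and off for a 0, millions of times a second. A minimal sketch of the idea (the function names and byte framing are invented for illustration, not taken from Professor Haas' actual system):

```python
# On-off keying over an LED: each bit of the message becomes one
# flicker state (1 = LED on, 0 = LED off), far too fast for the eye.
# Invented, simplified framing for illustration only.

def to_flickers(message: bytes) -> list:
    """Turn bytes into a sequence of LED on/off states, MSB first."""
    return [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]

def from_flickers(bits: list) -> bytes:
    """Reassemble the received on/off sequence back into bytes."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

signal = to_flickers(b"hi")
assert from_flickers(signal) == b"hi"  # lossless round trip
```

Real visible-light links use far more sophisticated modulation to reach the data rates Professor Haas describes, but the principle is the same: fast, invisible brightness changes carrying bits.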
“Our latest research shows that the data density can be 2,000 times higher than using radio links,” Professor Haas claims.
The idea capitalises on a growing trend in wireless communications – the convergence between mobile telephony and wi-fi internet access, two technologies once built, for good reason, to very different specifications. In fact, says Apurva Mody, who chairs an international committee developing standards for wide-area wireless internet access, mobile operators have saved themselves $50-99 billion a year by offloading data-intensive traffic from their networks onto wi-fi. That’s because every time you download a video to your smartphone through your home (or work) wi-fi, you've eased the load on the phone system.
“If all that data had come through the mobile networks,” he argues, “they would have had to build another 350,000 towers to meet demand”.
This kind of thing helps, of course. But part of the appeal of mobile – and one of the reasons it grew so quickly – was that it unshackled people from fixed connections. To continue its growth, mobile firms know that any solution to the bandwidth crunch needs to provide the same freedom. Hence the industry is constantly scrutinising the radio spectrum for new opportunities.
Currently, everything up to 300 GHz is spoken for - mobile phones are only permitted to use tiny portions, squeezed in between the bands allocated to broadcasters, radar, air traffic control, the military, satellite communications, even astronomers for whom keyholes must be left clear so they can tune into the universe. That's why every time a little more spectrum is released from some defunct use, operators are prepared to spend billions for a small slice of the new radio resource.
As a result, mobile firms are also looking at ways to use their existing capacity more efficiently.
And it is here that they may strike gold, according to Professor Lajos Hanzo, head of the department of telecommunications at the University of Southampton.
“The coding techniques used in smartphones are inching towards their theoretical limits,” he says. “But those are under idealised conditions. In the real world, their implementation rarely reaches the theoretical performance.”
In other words, if you're standing right next to a radio mast, with no other users in sight, and there's no radio interference, your link will be just about as strong as it theoretically could be; there would be no way of squeezing more bits through the spectrum.
But the world is not like that. We are far more likely to find ourselves far from the base station, wirelessly shouting to be heard among all the other transmissions, our signal broken up by reflections off metal buildings, or fractured by leafy trees. And it is these calls - straining from near the cell edge - that account for most of the shortfall from theoretical performance.
Various tricks are being considered to compensate for these deficiencies.
For example, Phil Pietraski and Bob DiFazio of Interdigital Communications, have proposed the idea of fuzzy cells. Cell edges, they reason, do not have to be rigidly defined. When we're at the edge of one cell, we are also within earshot of the next. It's a matter of convenience that our mobiles talk to one mast and not the other until we cross some imaginary line on the ground.
Fuzzy cells blur the boundary. Pietraski and DiFazio say it is possible to tweak the radio transmissions from towers in such a way that their footprints overlap at the edge, so that users approaching the mobile no-man's-land become connected to two masts, while other users further from the crossover don't suffer from additional interference.
Of course, coordinating the digital conversations becomes more complex. Under this system, instead of linking one handset and one base station, your mobile would hold two conversations with two base stations. Those would then talk to each other through the hard-wired “backhaul” of the network, to make sure no mistakes are made. If your “hap-” and “birth-” head off in one direction, while “-py” and “-day” go the other, you want to be sure they both get picked up and sent together to your aunt as a complete birthday greeting.
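In sequence-numbered terms, that reassembly looks something like the sketch below. The names and framing are invented for illustration; real networks coordinate this with far more elaborate protocols deep in the backhaul:

```python
# Fragments of one message arrive via two different base stations.
# Each fragment carries a sequence number, so the backhaul can merge
# the two streams back into the original order before delivery.
# Invented, simplified illustration of the idea in the article.

def reassemble(stream_a, stream_b):
    """Merge (sequence_number, fragment) pairs from two masts."""
    fragments = sorted(stream_a + stream_b, key=lambda pair: pair[0])
    return "".join(text for _, text in fragments)

# "hap" and " birth" via one mast, "py" and "day" via the other:
via_mast_1 = [(0, "hap"), (2, " birth")]
via_mast_2 = [(1, "py"), (3, "day")]
assert reassemble(via_mast_1, via_mast_2) == "happy birthday"
```

The sequence numbers are what guarantee the greeting arrives whole, whichever mast each fragment happened to travel through.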
Nevertheless, tests by Interdigital show that the method greatly increases the throughput for users at the edge, without impacting on customers more deeply embedded within the conventional cells. The company is currently working with a series of partners to bring the technology to market.
Other ideas raise the complexity of communication even further. For example, one of the hottest topics in mobile communications now is so-called cognitive radio.
The idea stems from observations that - despite much of the radio spectrum being carefully allocated to different uses - much of it is idle for large periods of the day. Cognitive radio proposes a handset smart enough to sniff the radio environment and opportunistically sneak into any bandwidth that is momentarily free, much as a pedestrian crossing a crowded precinct seeks out a path through the gaps, negotiating step by step.
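At its simplest, the sniffing step is an energy comparison: measure the power in each candidate band and treat anything below a threshold as free. A bare-bones sketch, with the -90 dBm threshold and channel labels invented for illustration:

```python
# Bare-bones spectrum sensing: a cognitive radio measures the power
# in each candidate band and opportunistically picks one that looks
# idle. The -90 dBm threshold and channel labels are invented.

def pick_free_band(measured_dbm, threshold_dbm=-90.0):
    """Return the quietest band below the threshold, or None."""
    idle = {band: p for band, p in measured_dbm.items() if p < threshold_dbm}
    if not idle:
        return None  # everything occupied: wait and sense again
    return min(idle, key=idle.get)

readings = {"ch_21": -60.0, "ch_22": -95.0, "ch_23": -101.0}
assert pick_free_band(readings) == "ch_23"  # quietest idle band wins
```

The hard part, as the researchers below point out, is not the comparison itself but sensing reliably enough - and vacating fast enough - that the licensed user never notices you were there.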
Anant Sahai of the University of California, Berkeley, has been drawing up maps of spectrum use across the United States, to get an idea of the opportunities for cognitive radio.
“There's a lot of spectrum allocated to government that's underutilised. For a very good reason. If the band is allocated, for example, for emergencies, then most of the time there aren't emergencies, and it's not used. So obviously there's a lot of spectrum that could be used for mobile phones and other data services at those times.”
“The challenging question,” adds Gerald Maguire of the Royal Institute of Technology in Stockholm, part of the original team who proposed the idea, “is whether that kind of spectrum can be shared dynamically, in such a way that if the services did need it, they can reclaim it very quickly.”
At the moment, spectrum allocations come tied to very tight technical restrictions to make sure that one user's equipment doesn't cut across another user's allocation. With cognitive radio, Maguire warns, that licensing would have to extend to the software that makes the decisions and retunes the frequencies – software that would be loaded onto millions of handsets, and might be hacked.
Trust will be a very important issue, agrees Sahai.
“In the old days, when you were allocated a particular band, you were the only one using that band, and you only had to trust yourself and the entities you control. When you talk about cognitive radio, it becomes very interesting because there are multiple users, and they don't all belong to the same system, so you have to be able to build trust between them.”
This is one reason why full-blown cognitive radio, which might roam across the whole spectrum, may be a long way off.
‘Low hanging fruit’
But shades of cognitive radio are already visible in the campaign to free up TV “white spaces”, which is high up the political agenda in the United States.
White spaces are the safety zones that separate TV broadcasters transmitting on the same frequency in different cities. But it means that, across the world, huge amounts of spectrum are going to waste.
Apurva Mody finds the prospects thrilling, particularly because TV spectrum by its very nature is good for propagating over large distances, making it ideal for his aim of wide-area internet access.
“Five billion people across the world have no access to the internet. And the TV spectrum is ideally suited to connecting them.”
When he's not promoting the new wi-fi standard, Dr Mody chairs the Whitespaces Alliance, an international pressure group urging governments to allow secondary users access to this unused spectrum.
TV spectrum is “the low hanging fruit” for cognitive radio, agrees Anant Sahai – and not only in the wide open spaces between cities. “Even in densely populated areas like city centres there is still a lot of white space - megahertz upon megahertz. What's more, the economics work out that you can deploy more infrastructure per square kilometre than you can in Wyoming.”
And because the spectrum situation is much more complicated in urban environments, cognitive radio will be essential for working out, street by street, which frequencies are free and which you'll have to give up.
Companies are already gearing up to build the equipment that can hop into the white spaces. Among them is Neul, a UK company based in Cambridge that has just announced what it says is the first integrated circuit capable of communicating over the TV white spaces.
True cognitive radio is still a long way off, says William Webb, Neul's Chief Technology Officer.
“Devices can't really detect transmissions with enough certainty to be sure what frequencies are free,” he explains. Instead, the system queries a geographic database indicating which frequencies are free where and when.
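A white-space database lookup boils down to: round your position to a grid square, look up which channels are protected there, and use the rest. The sketch below is invented for illustration - real deployments query regulator-approved databases that also account for times, transmit powers and antenna heights:

```python
# Sketch of a white-space database query: the device reports its
# position, and the database says which TV channels are protected
# there. Grid size, channel numbers and occupancy are all invented.

ALL_TV_CHANNELS = set(range(21, 31))

# Channels reserved for broadcasters, keyed by a (lat, lon) grid square.
PROTECTED = {
    (52.2, 0.1): {21, 24, 27},               # e.g. around Cambridge
    (51.5, -0.1): {21, 22, 23, 25, 26, 28},  # e.g. central London
}

def free_channels(lat, lon):
    """Channels a white-space device may use at this location."""
    square = (round(lat, 1), round(lon, 1))
    return ALL_TV_CHANNELS - PROTECTED.get(square, set())

# A dense city offers fewer free channels than open countryside.
assert free_channels(51.5, -0.1) == {24, 27, 29, 30}
assert free_channels(55.0, -3.0) == ALL_TV_CHANNELS  # no entry recorded
```

The appeal of the database approach, as Webb notes, is that it sidesteps the hardest part of cognitive radio: the device never has to prove by sensing alone that a channel is truly empty.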
Designing the 4mm-wide chip was not easy, Webb admits. One of the key problems was keeping the signal from the chip phenomenally clean – ten times purer than that from 3G and 4G devices. This is because the devices will operate at frequencies very close to commercial transmissions, and any corruption of the signal would cause unacceptable interference with broadcasts.
That kind of consideration will be critical for any future implementation of full-blown cognitive radio. One compromise Neul had to make was to keep the data transmission rates low. High wave purity combined with high data rates would put an impossible demand on battery life, Webb says.
If cognitive radio can develop beyond those TV white spaces, then the available spectrum could grow enormously. But it would have knock-on effects, not least for our smartphones.
“The first problem would be the antenna – one that could be reconfigured to operate across the whole wireless spectrum in an affordable and practical way for a small handheld – that would be impossible at the moment,” says Przemysław Pawełczak, software researcher at Delft University of Technology.
Then there would be the escalating complexity of the additional layers of software needed to run the frequency-hopping protocols, and the battery-draining demands of the hardware.
“But the biggest problem is that people in the industry are sceptical, they are afraid of change. But I don't know of any idea that's more disruptive – I think it's the ultimate idea of what our radio devices should do. But we have a long way to go to implement it.”