12 new tech terms you need to understand the future
From crowdturfing to brainjacking, BBC Future Now explores the unusual and intriguing vocabulary emerging from technology advances this year.
(Credit: MIT)

AUTONOMOUS ARCHITECTURE: Robots that print entire buildings, like this technology developed at MIT, promise to reduce the construction industry's reliance on human labour.

Robots could build your next house

Like an ink-jet printer, machines can build walls in layers in just a few hours.

Li-fi (Credit: Getty Images)

Most flyers will be familiar with the need to put their devices into “airplane mode”. But light-based technology on aircraft could allow people to remain online even during take-off and landing.

Overhead LED bulbs that turn off and on millions of times every second – so fast it is undetectable to the human eye – can send data 100 times faster than traditional wi-fi.
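At its simplest, the principle is on-off keying: each bit of a message maps to the bulb being on or off for one tick. Real li-fi uses far faster and more sophisticated modulation schemes, but this toy Python sketch illustrates the basic idea of turning data into light pulses and back:

```python
def led_encode(data: bytes) -> list[int]:
    """Map each bit of the payload to an LED state: 1 = on, 0 = off.
    A crude stand-in for real li-fi modulation, which is far faster."""
    states = []
    for byte in data:
        for i in range(7, -1, -1):          # most significant bit first
            states.append((byte >> i) & 1)
    return states

def led_decode(states: list[int]) -> bytes:
    """Reassemble bytes from a sequence of sampled LED states."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for bit in states[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

blinks = led_encode(b"hi")
print(blinks)              # 16 on/off states, one per bit
print(led_decode(blinks))  # b'hi'
```

The receiver simply samples the light level once per tick; because the flicker happens millions of times a second, the bulb appears steadily lit to the human eye.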

Airbus has been working on installing this li-fi technology on its aircraft: it avoids the interference with electronic equipment that has led to restrictions on wi-fi, and it also offers enhanced security.

For now, connecting to li-fi requires a special dongle that attaches to laptops. But li-fi capability appears in the code of Apple’s latest iOS operating system, suggesting the technology could become hardwired into devices in the future.

(Credit: Getty Images)

Medical implants with wireless functionality are becoming increasingly common. They can be programmed, controlled and recharged without the need for surgery or wires.

While more convenient, these wireless medical devices are also far more vulnerable to hacking. Former US Vice President Dick Cheney, for example, had the wireless function on his pacemaker turned off in case foreign powers tried to use it to assassinate him.

Now cybersecurity experts have warned that medical device hacking could take an even more disturbing twist as patients begin to receive implants in their brains. Known as Deep Brain Stimulation, these implants deliver electrical pulses that switch neurons in the brain on or off. They are already being used to treat conditions such as Parkinson's disease, and are being trialled in patients suffering from Tourette's syndrome, chronic pain, depression, anorexia, mood disorders and obsessive-compulsive disorder.

Researchers at the University of Oxford warn that the wireless programming used to control these brain implants could be hijacked by hackers to induce pain or tremors, or even to alter a patient's behaviour.

Platooning trucks (Credit: Volvo)

PLATOONING: Semi-autonomous trucks are now appearing on roads. Sensors enable tight-knit convoys, led by a human driver in the front vehicle, which reduce congestion and cut fuel consumption by around 10%.

(Credit: Getty Images)

Machine learning algorithms calculate our credit scores, assess insurance claims and policy applications, make music recommendations, offer investment advice and help to diagnose cancer. Artificial intelligence is also being used to spot financial fraud, and in the United States, it is applied in courts to help inform sentencing and bail decisions.

But while these systems can appear to give accurate results during testing, they can be flawed – they are only as good as the data they are trained on and can pick up bias, while the code itself can also carry the unintentional prejudices of those who wrote it.

"If there is a little bit of bias introduced at the early stage due to the data being used, that can be amplified with each generation of the code being improved,” explains Katherine Fletcher, coordinator of cyber-security research at the University of Oxford.
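Fletcher's point can be illustrated with invented numbers: if each retraining round treats the previous model's skewed outputs as ground truth, a small initial skew compounds rather than washing out. The figures and the growth factor below are arbitrary, chosen only to show the shape of the feedback loop:

```python
# Toy illustration of a bias feedback loop. All numbers are invented.
true_approval_rate = 0.50      # fraction of applicants who should pass
bias = -0.05                   # small initial skew from the training data

rate = true_approval_rate + bias
history = [rate]
for generation in range(5):
    # Each retraining uses the last generation's decisions as "ground
    # truth", so the skew compounds instead of averaging out.
    bias *= 1.5
    rate = true_approval_rate + bias
    history.append(rate)

# The approval rate drifts further from 0.50 with every generation
print([round(r, 3) for r in history])
```

A real system's drift would depend on how it is retrained and audited; the point is simply that errors fed back into training data do not stay small on their own.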

(Credit: Getty Images)

When buying a product or making a booking, most of us turn to online reviews written by other customers to help us make a decision. But some unscrupulous firms pay people to write positive reviews to tip the scales in their favour. Others seek to tarnish the reputation of competitors by paying for scores of negative comments.

This is known as “crowdturfing” – a portmanteau of “crowdsourcing”, where large numbers of people are recruited to support an effort, and “astroturfing”, a term used to indicate fake grassroots support.

The cost of hiring people has largely limited the spread of fake reviews. But researchers warn artificial intelligence will change that.

Scientists at the University of Chicago have shown it is possible to train an AI neural network to generate and post convincing reviews on sites like Yelp and Tripadvisor. Automated crowdturfing algorithms could produce so many fake reviews that it would quickly become impossible to know what to trust, warns computer scientist Ben Zhao, one of the researchers behind the study.

"We've already seen the ability of using fake news as a way of influencing public opinion and elections," says Zhao. “If attackers can make use of AI to generate large-scale reviews that look identical to real content, it will be very difficult to distinguish reality."
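The Chicago attack used a trained neural language model, which is beyond a snippet. But even crude template mixing, as in this sketch (every phrase here is invented), shows how cheaply plausible-looking reviews scale once no human has to write each one:

```python
import itertools
import random

# Hypothetical building blocks; a real crowdturfing attack would draw
# on a learned language model, not hand-written templates.
openers = ["Absolutely loved this place!", "Best meal I've had in months."]
details = ["The staff were so friendly.", "Portions were huge."]
closers = ["Will definitely be back.", "Five stars, no question."]

def fake_reviews(n: int, seed: int = 0) -> list[str]:
    """Generate n distinct-looking five-star reviews by mixing phrases."""
    rng = random.Random(seed)
    combos = list(itertools.product(openers, details, closers))
    rng.shuffle(combos)
    return [" ".join(parts) for parts in combos[:n]]

for review in fake_reviews(3):
    print(review)
```

Template mixing like this is easy to detect; the worry Zhao raises is that model-generated text is statistically much closer to what real customers write.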

(Credit: Getty Images)

A number of major companies have seen sensitive customer information and commercial data leak into the public domain following attacks by hackers. The WannaCry cyber-attack also crippled many systems within the UK's NHS last year, leading hospitals to cancel elective operations.

But as the battle between hackers and those defending sensitive computer networks escalates, experts predict machine learning will be used to find new vulnerabilities.

Attackers are also expected to make use of sophisticated AI chatbots in attempts to extract sensitive information, such as bank details, from internet users.

But artificial intelligence will also help improve cyber-security, by identifying vulnerabilities before they can be exploited, leading to a machine learning "arms race", according to anti-virus firm McAfee.

(Credit: Getty Images)

Computer vision is already creeping into vehicles on our roads – many use cameras and object recognition software to control self-parking technology. As cars become more autonomous, they will rely more and more on computer vision to navigate.

While computers have proven to be adept at recognising objects, recent research has shown they can also be tricked in some quite worrying ways. One study by the University of Washington showed that strategically placing a couple of stickers or bits of graffiti on a stop sign can confuse driverless car algorithms so that they no longer recognise it.

Another recent study showed that it is even possible to induce "hallucinations" in computer vision systems by changing the texture of an object. To human eyes, the shape and colour might look completely normal, but to the machine it could look like something totally different.

Researchers at the Massachusetts Institute of Technology fooled a common AI computer vision system created by Google into thinking a 3D-printed model of a turtle was a rifle.

Andrew Ilyas, who led the research, warns an attacker could use a similar approach to change the texture of a solid object so that a self-driving car sees it as a section of ordinary road. "There's certainly a risk," he says. "We haven't tried attacking self-driving cars ourselves, yet."
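The MIT attack optimised the object's texture against the classifier's gradients. A minimal sketch of the same idea on a toy linear classifier (the weights, features and threshold are all invented for illustration) shows how a small, targeted nudge to the input can flip the predicted label:

```python
import math

# Toy linear classifier: score > 0 means "stop sign", otherwise "road".
# Weights are invented for illustration only.
weights = [0.9, -0.4, 0.6]

def predict(x: list[float]) -> str:
    score = sum(w * xi for w, xi in zip(weights, x))
    return "stop sign" if score > 0 else "road"

def adversarial_nudge(x: list[float], epsilon: float = 0.8) -> list[float]:
    """Fast-gradient-style perturbation: push each feature a small step
    against the sign of its weight, lowering the 'stop sign' score."""
    return [xi - epsilon * math.copysign(1.0, w)
            for w, xi in zip(weights, x)]

image = [1.0, 0.2, 0.8]                   # stand-in for pixel features
print(predict(image))                     # stop sign
print(predict(adversarial_nudge(image)))  # road
```

Real vision models have millions of weights rather than three, but the attack surface is the same: because the model's decision is a continuous function of its input, carefully chosen small changes can steer it across a decision boundary while looking unremarkable to a person.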

Mirror (Credit: Getty Images)

THE ‘INTERNET OF EARS’: Like Alexa or Google Home, our mirrors, cars, showers, microwave ovens and even toilets could also soon be controlled by voice command, according to tech analysts J Walter Thompson.

mHealth (Credit: Getty Images)

Wearable technology like FitBits and the Apple Watch collect large amounts of data about your wellbeing and activity. But they promise more than simply aiding fitness.

The information that wearables collect can also start to feed into the patient records used by doctors, an approach known as "mobile health" or mHealth. Trips to your doctor will no longer require vague descriptions of symptoms that have mysteriously disappeared in the 24 hours since you booked your appointment: he or she could soon access the data from your wearable devices to aid diagnosis and improve the monitoring of illness.

Barrier at sea (Credit: Ocean Cleanup Foundation)

PLASTIC OCEAN BARRIERS: The Ocean Cleanup Foundation is extracting plastic pollution in the Pacific this year using a floating barrier to drag it towards the shore for recycling.

Charger (Credit: Getty Images)

Although wireless charging has been around for years in electric toothbrushes, the number of devices able to recharge their batteries untethered is going to explode this year (although hopefully not literally).

Apple is due to release its much delayed AirPower charging pad for use with its latest iPhone and Watch models. Like many other manufacturers including Samsung, Apple has adopted the Qi wireless charging standard, which uses an electromagnetic field to induce a current in a coil inside the receiving device, charging its battery. When the device is placed on top of the transmitter, charging begins.
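The induction at the heart of Qi follows Faraday's law: a rapidly alternating magnetic flux through the receiver coil induces a voltage proportional to the number of turns and the rate of change of flux. A back-of-envelope estimate, with all figures invented rather than taken from the Qi specification:

```python
import math

# Rough Faraday's-law estimate of the peak voltage induced in the
# receiver coil of an inductive charger. All numbers are illustrative.
frequency_hz = 120_000    # Qi operates at roughly 100-200 kHz
peak_flux_wb = 2e-6       # assumed peak magnetic flux through one turn
turns = 10                # assumed receiver coil turns

# For sinusoidal flux, peak EMF = N * 2 * pi * f * Phi_peak
peak_emf = turns * 2 * math.pi * frequency_hz * peak_flux_wb
print(f"peak induced EMF ~ {peak_emf:.1f} V")
```

The high operating frequency is what makes small, flat coils practical: the faster the flux alternates, the less flux (and so the less coil area) is needed to induce a useful voltage.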

Expect to see more wireless Qi charging pads springing up all over the place, from airports and coffee shops to cars. McDonald’s, for example, is installing wireless Qi charging stations in 1,000 locations around the UK in 2018 while car manufacturers including BMW, Ford, Toyota and Volkswagen have said they will include wireless charging pads in their new cars.
