Could advances in technology, genetics and artificial intelligence lead to a world in which economic inequality turns into biological inequality? asks the historian and writer Yuval Noah Harari.
Inequality goes back at least 30,000 years.
Hunter-gatherers were more equal than subsequent societies.
They had very little property, and property is a prerequisite for long-term inequality.
But even they had hierarchies.
In the 19th and 20th Centuries, however, something changed.
Equality became a dominant value in human culture, almost all over the world. Why?
It was partly down to the rise of new ideologies such as humanism, liberalism and socialism.
But it was also about technological and economic change - which was connected to those new ideologies, of course.
Suddenly the elite needed large numbers of healthy, educated people to serve as soldiers in the army and as workers in the factories.
Governments didn't educate and vaccinate to be nice.
They needed the masses to be useful.
But now that's changing again.
The best armies today require a small number of highly professional soldiers using very high-tech kit.
Factories, too, are increasingly automated.
This is one reason why we might - in the not-too-distant future - see the creation of the most unequal societies that have ever existed in human history.
And there are other reasons to fear such a future.
With rapid improvements in biotechnology and bioengineering, we may reach a point where, for the first time in history, economic inequality becomes biological inequality.
Until now, humans have had control of the world outside them.
They could control the rivers, forests, animals and plants.
But they have had very little control of the world inside them.
They have had limited ability to manipulate and engineer their own bodies, brains and minds.
They could not cheat death.
That might not always be the case.
There are two main ways to upgrade humans.
Either you alter their biological structure by engineering their DNA.
Or, more radically, you combine organic and inorganic parts - perhaps by directly connecting brains to computers.
The rich - through purchasing such biological enhancements - could become, literally, better than the rest: more intelligent, healthier and with far greater life-spans.
Think about it like this.
In the past, the nobility tried to convince the masses that they were superior to everyone else and so should hold power.
In the future I am describing, they really will be superior to the masses.
And because they will be better than us, it will make sense to cede power and decision-making to them.
We might also find that the rise of artificial intelligence - and not just automation - will mean that huge numbers of people, in all kinds of jobs, simply lose their economic usefulness.
The two processes together - human enhancement and the rise of AI - may result in the separation of humankind into a very small class of super-humans and a massive underclass of "useless" people.
Here's a concrete example.
Think about the transportation market.
You have thousands of lorry, taxi and bus drivers in the UK.
Each of them commands a small share of the transportation market, and that gives them political power.
They can unionise, and if the government does something they don't like, they can go on strike and shut down the entire transportation system.
Now fast-forward 30 years.
All vehicles are self-driving.
One corporation controls the algorithm that runs the entire transportation market.
All the economic power previously shared by thousands of drivers, and all their political power, is now in the hands of a single corporation.
Once you lose your economic importance, the state loses at least some of the incentive to invest in your health, education and welfare.
It's very dangerous to be redundant.
Your future depends on the goodwill of some small elite.
Maybe there is goodwill.
But in a time of crisis - like a climate catastrophe - it would be very easy to toss you overboard.
Technology is not deterministic.
We can still do something about all this.
But I think we should be aware that what I'm describing is one possible future.
If we don't like this possibility, we need to act before it's too late.
There is one more possible step on the road to previously unimaginable inequality.
In the short-term, authority might shift to a small elite that owns and controls the master algorithms and the data that feeds them.
In the longer term, however, authority could shift completely from humans to the algorithms.
Once AI is smarter than us, all humanity could be made redundant.
What would happen after that?
We have absolutely no idea.
We literally can't imagine it.
How could we?
We are talking about an intelligence far greater than that which humanity currently possesses.