IBM bets on data-centric computing

  • 4 October 2011
  • From the section Business
Dr Jai Menon says self-learning computers are the future

Each week we ask high-profile technology decision-makers three questions.

This week it is Dr Jai Menon, the chief technology officer and vice-president for technical strategy for IBM's Systems and Technology Group.

He holds 52 patents and is arguably most famous for his contribution to RAID storage technology. Computing giant IBM has more than 426,000 employees, generating annual turnover of just under $100bn (£64.6bn) and profits of $14.8bn.

What's your biggest technology problem right now?

There are always multiple problems, but one problem that we are focused on is providing our customers with IT solutions that are flexible to their needs, but easily consumable.

Our customers have many different kinds of workloads they have to run, for example transaction-based systems that have to serve thousands or millions of users at the same time, 24/7.

Or analytics systems with fewer users that require deep complex computation. The challenge is how do you satisfy all these different kinds of tasks?

There are two different approaches. You can standardise on one kind of computer and use it for all your business tasks. But that doesn't really work: it's like saying 'buy just one type of car' and hoping it meets the needs of a small family and doubles up as a pick-up truck, a big van or an MPV [people carrier].

So the other approach is to realise that you have lots of different types of workload, and you buy systems that are optimised for these tasks. That's clearly preferred to the first approach. The challenge over time is that with lots of different workloads, you end up with many kinds of computers, and then there's the challenge to make that consumable.

We are working on a technical approach that will create a system that has all the pieces that make up a computer system. You build this system with different kinds of processors, and there are memory and storage and networking elements, and then you have very sophisticated software that comes with the system. And the software is able to construct the kind of computer you need.

So if you need a lot of computing power, medium-sized storage and not a lot of memory, that's what the system provides. And once the task is running, and you need more memory or computing power, then the system will make that choice for you.

And when your workload goes away, you simply deconstruct the system.
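The construct-grow-deconstruct cycle Dr Menon describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not IBM's actual design: the class and method names (ResourcePool, LogicalComputer, grow, deconstruct) are invented for the example, which models a shared pool of processors, memory and storage from which a logical computer is assembled, expanded while running, and then dissolved back into the pool.

```python
# Hypothetical sketch of a composable system: a shared resource pool
# from which "logical computers" are constructed, grown mid-run, and
# deconstructed when the workload goes away. All names are illustrative.

class ResourcePool:
    def __init__(self, cpus, memory_gb, storage_tb):
        self.free = {"cpus": cpus, "memory_gb": memory_gb, "storage_tb": storage_tb}

    def allocate(self, **request):
        # Refuse the request if any resource in the pool is exhausted.
        if any(self.free[k] < v for k, v in request.items()):
            raise RuntimeError("insufficient resources")
        for k, v in request.items():
            self.free[k] -= v
        return LogicalComputer(self, request)


class LogicalComputer:
    def __init__(self, pool, allocated):
        self.pool = pool
        self.allocated = dict(allocated)

    def grow(self, **extra):
        # The task is already running and needs more memory or compute:
        # draw the extra capacity from the shared pool.
        for k, v in extra.items():
            if self.pool.free[k] < v:
                raise RuntimeError("insufficient resources")
            self.pool.free[k] -= v
            self.allocated[k] = self.allocated.get(k, 0) + v

    def deconstruct(self):
        # The workload has gone away: return everything to the pool.
        for k, v in self.allocated.items():
            self.pool.free[k] += v
        self.allocated = {}


pool = ResourcePool(cpus=64, memory_gb=512, storage_tb=100)

# A compute-heavy analytics task: lots of processors, modest memory.
analytics = pool.allocate(cpus=32, memory_gb=64, storage_tb=10)
analytics.grow(memory_gb=64)   # the running task turns out to need more memory
analytics.deconstruct()        # workload finished; the pool is whole again
```

The point of the sketch is that the unit of allocation is the workload, not a fixed machine: the same pool serves a transaction system and an analytics job with very different shapes, which is what distinguishes this from conventional virtualisation on one standardised computer.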

This is not just virtualisation, where you have one kind of standardised computer, with a standardised processor and a certain amount of storage and memory.

You need to be able to assign more resources than a single computer can provide.

This is very much customer driven. What our customers are telling us is: 'Come up with newer better computers, that take up less floorspace and are faster.'

People have amazing amounts of workload and require lots of different virtualisation environments, but they also have too many different systems.

So I've got to let customers reuse their existing assets, skills and software.

The software is key - it's a universal resource manager.

What's the next big tech thing in your industry?

IBM's Watson computer was a proof of concept, says Dr Menon

The next big thing in our industry is new kinds of computers. I call them data-centric computers, because right now our computers are very processing-centric.

These new computers can extract and find information in data that can aid human cognition. When we created [supercomputer] Watson, it combined hardware and deep analysis software that we designed to work together.

We are moving away from computers that compute, to computers that can extract information from the huge amounts of unstructured data - because every two days we generate more data than all data from the dawn of civilisation until 2003.

Watson was just an example to prove the point. There are very interesting business problems out there, and rather than having to be programmed these computers learn as they go along.

They are data-centric rather than compute-centric.

For example, they could work as a physician's assistant, providing all the knowledge and the data about the patient, and managing the doctor's notes.

Right now, all we do is Google a medical problem, and we get back 20 documents, and we have to go through them and rate them and find the answer.

In the future, the computer will give you an answer with a probability to go with that, and that's so much better than what we do today.

That to me is the next big technology thing. And it also applies to government. Computers could help governments find answers to tax issues, zoning laws, financial issues.

From a technology point of view, we still need a few things to support this - more memory in the system, solid state memory and storage, and obviously the deep software.

This is not Skynet [as described in the Terminator movies]. People always worry about new technology. When pocket calculators were introduced, people said we would forget to multiply; when computers came they said we would forget how to spell.

In reality all these computers are assistants, and they save us time so that we can focus on doing the things that only humans can do.

Pilots, for example, have always had things to help them fly a plane. But at the end of the day I would not fly without a real human on board.

What's the biggest technology mistake you've ever made - either at work or in your own life?

This is probably an unusual kind of answer, but the timing of innovation is really important. My experience is, as innovators, we are always frustrated if we are too late.

We say: "I had the idea first, why did product development not move fast enough?" But my biggest mistake was in pushing an innovation too early to market, and I've learned from that.

What I've learned is that you really have to prepare the market. You have to shape the market, prepare your customers, create a standard, get enough people to buy into the standard.

And if you introduce your product too early and you haven't done that, then your product doesn't do very well. You just create a vicious circle, because you don't have the profits from the product to recycle and improve and innovate the product.

And then, once the market is ready and prepared, you will be hesitant because you tried this once before and it didn't work. Then it gets very difficult to re-enter the market.

For example in the storage space, we developed this IP [internet protocol] driven storage attached to the network. We shipped it in 2001, and it didn't do so well in the market.

This is now a $3bn market - 10 years later it's a great story, but by pushing it too soon, maybe five years too soon, it soured our executives as to whether this really was a good idea.

And then it is hard to catch up later.

Timing is everything. You can be wrong on both sides, too early and too late, and both are bad.
