There are times when I feel as if I’m truly living in the future. It happened to me most recently when, browsing an online newspaper archive, I came across a 1954 article in the Los Angeles Times about the dawning age of language translation by computers. The short Associated Press article about an IBM computer, trumpeted as the first computer capable of translation between different languages, ended with an example of its skills.
The reporter, in suitably sensational language, explained that “the brain” was fed a sentence in Russian to translate into English. “Lights flash, there is a subdued clinking and clanking, and in 10 seconds you’ve got the translation,” the article said.
Curious to see if it had got it right, I copied the Russian text from the story and opened a new tab in my browser. A quick copy and paste into Google Translate confirmed the translation - no need for a supercomputer or access to the laboratories of a computing giant. And had the original quote been in any of the 63 languages supported by Google, the process would have been just as fast.
Welcome to the future.
My discovery got me hunting for the origins of this technology that we now take for granted. As it turns out, our interest in multilingual machines and trouble-free translation goes back much further than the 1950s. In fact, you have to go back to 1629, when the French philosopher and mathematician René Descartes proposed a series of universal symbols into which any language could be converted. His idea was seemingly never capitalised on. In 1933, patents were filed independently in France and Russia for devices that used different mechanical means of translating languages via paper tape. But, as is so often the case, war was the catalyst for serious effort in the field.
Fear and loathing
Electromechanical cipher machines used during WWII, such as the German Enigma, inspired post-war scientists to dive headfirst into the bold new era of computer translation. One of the field’s early proponents was the American scientist Warren Weaver, director of the Natural Sciences Division of the Rockefeller Foundation. In 1946 he read a report by the English physicist Andrew D. Booth that convinced him machine translation was just around the corner. In the following years, his colleagues encouraged him to elaborate on his ideas, resulting in his 1949 memorandum “Translation”. The document, said to be the single most influential publication in the early days of machine translation, outlined a series of ambitious goals for the field, despite appearing at a time when few people knew what computers might be capable of.
The note, which recognised the need for a “tremendous amount of work in the logical structures of languages before one would be ready for any mechanization”, was circulated to about 200 of his friends (many of whom were US government policymakers) and is said to have inspired virtually all serious research into the subject in the 1950s.
But Weaver’s memo was not the only driver for this burgeoning field. What really kick-started research was Cold War fear and the US desire to easily read and translate Russian technical papers.
In the mid-1950s, roughly 50% of scientific papers published around the world were in English. The average paper cost about $6 to translate (around $50, adjusted for inflation), and translating highly technical papers required a human translator intimately familiar with the material. The enormous amount of time and the high cost of translating those papers presented a problem for Americans obsessed with being at the forefront of new technological developments - and beating the Russians.