
Why is Machine Translation Soooooo… Hard?

Machine translation is the science of taking human language content, usually in text form (although speech-to-speech systems are being developed), and using a computer to digest that content and produce output in another language that is faithful to the original. Simply put: English in, German out. So why is that so unbelievably hard to do?

Going back to the history of machine translation (hereafter called MT), the original scientists believed it was a word-for-word lookup-and-replacement problem. It is easy for even the uninitiated to see that this solution would break down quickly. For one thing, there was no word rearrangement. How would a Spanish speaker be able to read an English source sentence translated into Spanish if all of the adjectives came before the nouns instead of after them? That is only a tiny fraction of the problems, but an easy one to understand. So this was obviously a case of needing rules for word rearrangement, which in turn required that the MT engine figure out the part of speech of each word in the source. How could you move the adjectives behind the noun if you didn’t know which words were the adjectives and which was the noun they were modifying? Thus “direct” systems were born, in which the engine rearranged the words of the source and then flipped them into the target language.

Pretty soon computers got a whole lot more powerful, and with more computing power came even more rules. Direct systems morphed into “transfer” systems, where you could actually break the engine down into stages: analysis, transfer, synthesis. The quality improved significantly, but that is sort of like saying it was less bad. Still, investment dollars were plentiful, because the holy grail of a near-perfect system would literally change the world. In the 1980s, scientists started playing with hidden Markov models (don’t worry, I’m not going to explain what they are) and statistical MT. Basically: given the words around a word, and given all of the text the engine has already read, what are the odds that the intended meaning is this sense of the word and not some other sense? In that world, of course, more data is better data, and statistical MT did get everyone on board with systems like Google Translate and other free offerings.
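To make that statistical bet concrete, here is a minimal sketch of context-based sense disambiguation, the kind of decision a statistical engine makes millions of times over. The senses and co-occurrence counts are invented for illustration; a real system would learn them from an enormous corpus.

```python
from collections import Counter

# Invented co-occurrence counts: how often each context word appeared
# near each sense of "bank" in a hypothetical training corpus.
SENSE_CONTEXTS = {
    "bank/finance": Counter({"money": 120, "loan": 85, "deposit": 60}),
    "bank/river":   Counter({"water": 95, "fishing": 40, "muddy": 25}),
}

def guess_sense(context_words):
    """Pick the sense whose training contexts best match the input.

    This is the statistical-MT wager: the sense with the highest
    co-occurrence score wins, even though the odds are never 100%.
    """
    scores = {
        sense: sum(counts[word] for word in context_words)
        for sense, counts in SENSE_CONTEXTS.items()
    }
    return max(scores, key=scores.get)

print(guess_sense(["money", "deposit"]))  # bank/finance
print(guess_sense(["muddy", "water"]))    # bank/river
```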

I’m going to go out on a limb here and declare my belief that Google Translate has failed with its current technology. If more data is the answer, Google has been scanning the web since 2007 to digest existing translations, and its output is still just “less bad”.

So why have all of the millions of person-hours poured into statistical MT failed? Have you ever heard the expression “if you are a hammer, everything looks like a nail”? The field has been using the wrong tools, and to some degree it has for decades. Think about it: where do languages sit at the university level? Liberal arts. Writing is an act of art, not science. Translating art to art (say, Moby Dick from English to Spanish) is NOT a scientific endeavor; it is an artistic one. Good translators are artists. They capture the mood, the meaning, and even the essence of the source, and try to replicate that experience for the reader of the translation. They are literally repainting the author’s verbal picture in another language.

OK, so how do we get closer to the artist? For one thing, we need to understand the meaning of the source. We need to understand things like intensity, ambiguity, culture, and many other qualities that are very hard to quantify scientifically. At LinguaSys, whose founders have decades of experience in this space, we understood all of this, and it shows in the path we have taken through more than a decade of developing our MT technology. Before we even begin to think about translating source content, we deeply analyze it both semantically and syntactically. By no means are we done, but what we have been able to produce for our clients to date is a system that retains fidelity, sometimes at the expense of good syntax. More importantly, we understand where to use MT and where not to, and where most MT fails is where it is used wrongly. We have proven that MT works very well when there is a given domain we can concentrate on: for one client, financial services. That is a limited domain, and knowing that the source will belong to it gives us strong clues for disambiguating words that have multiple meanings. Based on the state of the art today (and I believe we are the state of the art today), MT for the masses (Bing, Google, etc.) is one big FAIL. Use it at your own risk. Your mileage may vary.
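To illustrate how a known domain narrows the choices, here is a minimal sketch; the sense inventory and domain tags are invented, standing in for a real lexicon.

```python
# Invented sense inventory: each sense of "interest" is tagged with
# the domains where it is plausible.
SENSES = {
    "interest": [
        {"gloss": "money paid on a loan",    "domains": {"finance"}},
        {"gloss": "curiosity or attention",  "domains": {"general"}},
        {"gloss": "legal stake in property", "domains": {"finance", "legal"}},
    ],
}

def candidate_senses(word, domain):
    """Keep only the senses plausible in the active domain.

    With domain="finance", the "curiosity" sense is eliminated before
    any statistics run at all: the payoff of a limited domain.
    """
    return [s for s in SENSES.get(word, []) if domain in s["domains"]]

for sense in candidate_senses("interest", "finance"):
    print(sense["gloss"])
# money paid on a loan
# legal stake in property
```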

Wherever new invention in AI takes us, it must concentrate more on semantics and less on statistics. Yes, statistics helps a lot, and it can play a big role as one voting component of the engine when making a disambiguation decision. But for MT, or speech recognition, or any of the other imperfect technologies that are “less bad” on a daily basis, the secret sauce to getting them “good” is semantics.
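Here is a minimal sketch of what “statistics as one voting part” could look like: a weighted vote between a statistical score and a semantic plausibility score. All the numbers and weights are invented for illustration.

```python
# Hypothetical per-sense evidence for "bass" in "play the bass".
# A real engine would compute both score tables, not hard-code them.
STATISTICAL = {"bass/fish": 0.55, "bass/instrument": 0.45}
SEMANTIC    = {"bass/fish": 0.10, "bass/instrument": 0.90}  # "play" wants an instrument

def vote(w_stat=0.4, w_sem=0.6):
    """Weighted vote: statistics participates, semantics leads."""
    combined = {
        sense: w_stat * STATISTICAL[sense] + w_sem * SEMANTIC[sense]
        for sense in STATISTICAL
    }
    return max(combined, key=combined.get)

print(vote())  # bass/instrument
```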

Natural Language Understanding Is Now Available to Millions of Developers

Create Your Own Applications in Days, Not Months, With the Groundbreaking LinguaSys Server That Allows Anyone With Basic Programming Skills to Create Natural Language Conversational Interfaces in Over 20 Languages

BOCA RATON, Fla., February 11, 2015 – Millions of developers with only basic programming skills will now be able to create their own Artificial Intelligence (AI) Natural Language Understanding Interfaces (NLUI) with today’s announced release of the LinguaSys Natural Language User Interface Server. The new, multitenant NLUI Server runs on Microsoft Azure’s open cloud platform, so users can quickly build, deploy, and manage applications across a global network.

Anyone who can write XML scripts can create complex NLU applications, such as hotel reservations, car rentals, an NLU interface to a favorite Customer Relationship Management (CRM) system, or their own Siri-like application, by leveraging LinguaSys’ extensive semantic network of 20+ languages and its NLU engine, which moves most of the AI from the developer to the engine.

“We’re revolutionizing the byzantine Natural Language Understanding marketplace by commoditizing the ability to create AI interfaces in hours or days, not months and years,” said Brian Garr, CEO of LinguaSys. “NLUI Server allows you to write an AI interface once, and have it accept input in all LinguaSys supported languages.”

This new NLU capability is available on the LinguaSys GlobalNLP™ portal at https://nlp.linguasys.com/. Trial subscriptions are free, and the server comes with an editor, a validator, and a runtime.

“While our competitors are creating highly proprietary systems that only they can control, at exorbitant pricing levels, we are making NLU a commodity to enhance the customer experience for large and small enterprises around the world,” said Garr. “This is not smoke and mirrors. Try it yourself, today.”

LinguaSys’ NLUI Server is the latest addition to its suite of multilingual software offerings included in GlobalNLP™, a cloud-based API publishing platform for developers announced October 21, 2014. GlobalNLP™ enables the global software developer population to build Natural Language Processing applications with their own business logic and extract meaning from unstructured or conversational text across languages (https://nlp.linguasys.com/). GlobalNLP™ is also available on the Oracle Cloud Marketplace at https://cloud.oracle.com/marketplace/listing/2135165/LinguaSys/GlobalNLP?_afrLoop=226145371855068&_afrWindowMode=0&_afrWindowId=9v0y8iryo_6; the IBM Cloud Marketplace at https://marketplace.ibmcloud.com/apps/1897#!overview; and soon at the Microsoft Cloud Marketplace.

###

About LinguaSys, Inc.

LinguaSys solves human language challenges in Big Data and social media for blue-chip clients around the world. Its natural language processing software provides real-time multilingual text analysis, sentiment analytics, and fast, cost-effective natural language user interfaces. The solutions are powered by LinguaSys’ Carabao Linguistic Virtual Machine™, a proprietary interlingual technology, to deliver faster and more accurate results. Designed to be easily customized by clients, the solutions can be used via SaaS or behind the firewall. Headquartered in Boca Raton, FL, LinguaSys is an IBM Business Partner. www.linguasys.com   @LinguaSys   Join us on LinkedIn: http://linkd.in/1rC1qzi

What Microsoft and Apple may never figure out

So let’s say that you need to build something really cool, like a virtual assistant that understands human language. Now let’s say that you take a bunch of MIT mathematics guys and tell them to figure it out. How likely is it, do you venture, that these guys will ever pay attention to the actual meanings of the words being evaluated? Right. Thus we have “state of the art” virtual assistants based on prediction algorithms and models. So why don’t Apple and Microsoft care about the meaning of words? Probably because hundreds, if not thousands, of developer jobs depend upon statistical methods. This is why Microsoft and Apple will never get to the next stage of NLU. The theory of feeding millions of words into a hidden Markov model on the premise that “the more data, the better” has hit a wall, and no one (mathematicians) wants to admit it. Let’s face it: if “more data is better,” then why does Google Translate put out such awful translations most of the time? They have been trawling the web adding more data for a decade.

The missing link in all of this? Semantics: the idea of actually understanding the meaning of every word in an utterance. Betting on the most probable sense of a word obviously leaves an error rate, because the odds are almost never 100%. Forget about the rest of the world; English alone is incredibly ambiguous. Take the word “tank”. Inside the LinguaSys English lexicons, “tank” has nine possible meanings: I tanked the test. Fill the gas tank. The tank rolled into battle. And so on. The truth, as I see it and experience it, is that semantics beats statistics hands down. In fact, we at LinguaSys have beaten statistics hands down. We recently competed against two of the very largest providers of natural language understanding at a major auto manufacturer. All three vendors had to complete the same task. Guess what: we won! And although I don’t have the actual metrics for this, I imagine that we did it in about a tenth of the time and cost. Because we deal in semantics, if I need to include the option to bring pets in my hotel reservation virtual assistant, I don’t have to build an embedded grammar listing all of the kinds of pets there could be. I already know, from my semantic tree, all of the “children” of pets: one line of code versus hours of grammar building. Now expand this tree to 19 other languages, and you start to see the power of semantics.
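To show what “one line of code versus hours of grammar building” might mean in practice, here is a minimal sketch with an invented toy taxonomy standing in for a full semantic network:

```python
# Invented toy taxonomy: child concept -> parent concept.
# A production semantic network would be vastly larger and multilingual.
IS_A = {
    "dog": "pet", "cat": "pet", "parrot": "pet",
    "pet": "animal", "sedan": "car",
}

def is_a(concept, ancestor):
    """Walk up the taxonomy: does `concept` descend from `ancestor`?"""
    while concept in IS_A:
        concept = IS_A[concept]
        if concept == ancestor:
            return True
    return False

# The grammar approach would enumerate every pet by hand; the semantic
# approach is effectively one line at the call site:
print(is_a("parrot", "pet"))  # True
print(is_a("sedan", "pet"))   # False
```

Adding a new kind of pet then means adding one taxonomy entry, not revisiting every grammar that mentions pets.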

We live in an era of statistics, and it certainly has its place, but the Microsofts and Apples of the world will, I fear, never fulfill their vision, simply because embracing semantics would cost too many developers their jobs. When you are a hammer, everything looks like a nail.

New Natural Language Processing API Portal

By Adrian Bridgwater, Dr. Dobb's, October 22, 2014

http://www.drdobbs.com/tools/new-natural-language-processing-api-port/240169200

Human language big data company LinguaSys has created a new API portal called GlobalNLP to reach what it calls “the flourishing global developer population” building Natural Language Processing (NLP) applications with their own business logic.

GlobalNLP understands and extracts meaning from unstructured or conversational text to comprehend the meaning of textual human dialog across over 20 languages.

LinguaSys works across the “more challenging” Asian and Middle Eastern languages with high-quality semantics. Features include customizable language models, attribute tags, concept tagging, domain detection, a full semantic network, hyponyms, hypernyms, synonyms, keyword extraction, language detection, lemmatization, morphology, parsing, part-of-speech tagging, relation extraction, sentiment analysis, translation, transliteration, cross-lingual retrieval, anaphora resolution, and natural language interfaces.

The company provides services to developers via RESTful cloud-based API connectivity, with up to 20 API calls a minute and up to 500 free API calls a month for testing.
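For a flavor of what calling such a REST API might look like, here is a hedged sketch in Python. The base URL, route, request fields, and authentication header are all hypothetical; the real API reference lives on the GlobalNLP portal.

```python
import requests

# Hypothetical base URL and route, for illustration only; consult
# https://nlp.linguasys.com/ for the actual API reference.
BASE_URL = "https://nlp.linguasys.com/api"  # hypothetical
API_KEY = "your-api-key-here"

def analyze_sentiment(text, language="en"):
    """POST text to a (hypothetical) sentiment endpoint."""
    response = requests.post(
        f"{BASE_URL}/sentiment",  # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},  # hypothetical auth scheme
        json={"text": text, "language": language},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Stay within the free tier: up to 20 calls a minute, 500 a month.
print(analyze_sentiment("The new portal is impressively fast."))
```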

“LinguaSys is also making Story Mapper available on GlobalNLP. [This] provides insights to media intelligence companies, analyzing large volumes of unstructured text in native languages, then categorizing and summarizing the data,” said Brian Garr, CEO of LinguaSys.

Story Mapper searches big data, social media, news feeds, and other digital text for content, sentiment, names, quotations, and relationships: extracting facts, topics, and events; assessing tone; and determining the prominence, relevance, and dependence of entities in multiple original languages.
