While not strictly about Translation, this article on Slashdot points to researchers at Carnegie Mellon who are teaching software to learn a language. The implications should be obvious: the leap from this to Machine Translation was my first thought, and it is probably the most relevant link for this community.
From the CMU site, the main goal of the research is:
To develop a probabilistic, symbolic knowledge base that mirrors the content of the web. If successful, this will make text information on the web available in computer-understandable form, enabling much more sophisticated information retrieval, natural language understanding, and general problem solving.
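To make the quoted goal concrete, here is a minimal sketch of what "probabilistic, symbolic" means in practice: facts extracted from the web are stored as (subject, relation, object) triples, each tagged with a confidence score rather than treated as absolute truth. All names, facts, and thresholds below are illustrative assumptions, not details of the CMU system.

```python
class KnowledgeBase:
    """Toy probabilistic knowledge base: triples with confidence scores."""

    def __init__(self):
        # Map each (subject, relation, object) triple to a confidence.
        self.facts = {}

    def assert_fact(self, subject, relation, obj, confidence):
        """Record a candidate fact with a confidence in [0, 1]."""
        self.facts[(subject, relation, obj)] = confidence

    def query(self, relation, threshold=0.8):
        """Return (subject, object) pairs believed above the threshold."""
        return [
            (s, o)
            for (s, r, o), conf in self.facts.items()
            if r == relation and conf >= threshold
        ]


kb = KnowledgeBase()
# A reliable extraction and a noisy, ambiguous one (illustrative data).
kb.assert_fact("Pittsburgh", "city_in", "Pennsylvania", 0.97)
kb.assert_fact("Pittsburgh", "city_in", "Kansas", 0.55)
print(kb.query("city_in"))  # only the high-confidence fact survives
```

The point of the confidence scores is that text on the web is contradictory and noisy, so the system can keep conflicting candidate facts around and let later evidence raise or lower its belief in each.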
Not to mention that as non-English languages become more prevalent on the web, the program should work just as well for them. Suddenly all you need is an interface between two different instances of the system, one per language, and voila!
At the moment, one of the biggest challenges for large technology firms, and often cited as the soon-to-be most in-demand job, is data mining: statistical analysis of incredibly large datasets. Which describes the Machine Translation problem exactly. For now, the datasets are too large and the computers not fast enough. But progress is being made.