On disappearing scripts

Medium is a new online format on which I’ve been seeing more and more writing of note; Quinn Norton’s essay collection is an example of some of the most interesting online writing at the moment. Smart, savvy, independent, thoughtful, nuanced.

This week I stumbled across another piece for language nerds, about the potential demise of nastaliq, the Urdu script – one of the notable Persian scripts, still found in parts of Afghanistan, western China, Pakistan and India:

…Urdu, a South Asian language spoken by anywhere between 100–125 million people in Pakistan and India, and one of Pakistan’s two official languages. Urdu is traditionally written in a Perso-Arabic script called nastaliq, a flowy and ornate and hanging script. But when rendered on the web and on smartphones and the entire gamut of digital devices at our disposal, Urdu is getting depicted in naskh, an angular and rather stodgy script that comes from Arabic. And those that don’t like it can go write in Western letters.

Here’s a visual comparison taken from Wikipedia.

Nastaliq v. Naskh. Courtesy Wikipedia.

Looking at the picture, the discerning eye may immediately realize why naskh trumps nastaliq on digital devices. With its straightness and angularity, naskh is simply easier to code, because unlike nastaliq, it doesn’t move vertically and doesn’t have dots adhering to a strict pattern. And we all know how techies opt for functionality.

I’m glad the writer goes further, finding the fascination of a language Romanized (the romance of a language romanized), although the writer makes the following claim, which I found odd (emphasis mine):

Writing in Roman letters also makes it easier to switch in and out of English. As an example, take a recent Tweet by the human rights activist Sana Saleem: “If you’ve read my tweets, or my work, I hardly ever cuss. Sorry about that, par bus boat hogaya, buss kardo bass.”

To me, as a writer, that is an astonishing piece of text. Not only are we looking at two languages collapsed into one, but the Romanized part is a language that has not yet been formalized; it is literally under construction due to the pressure exerted by the exigencies of the internet.

The implication that the English language is somehow fully formalized and protected from the vagaries of the internet is just incorrect – it has been three years since Superlinguo dropped I can has language play on us – and beyond that, English is still being contested offline. Online just gives younger people greater sway in that contest.

It’s also not that surprising or astonishing a concept to almost anyone who speaks a second or third language – I presume, anyway. For someone like me, who speaks small amounts of three or four other languages, interlingual word play has always been a source of humour, power and poetry.

Despite this minor quibble, it’s a fascinating insight into the deep search that humans go on when confronted with so much knowledge, leading the author unsuccessfully to the doors of Apple and then, surprisingly successfully, to the doors of Microsoft.

It’s a great reminder of how fragile a language or culture can be - despite the ubiquity of information and knowledge online.

Words, Poetry, Translation and Boredom

For at least a decade my favourite website has been UbuWeb. Not in the visit-it-twice-a-day category like BoingBoing – more like a hot cross bun or a mango – it’s made more special by being visited infrequently.

UbuWeb’s main trade is in the otherwise unfindable, the undesirable, the unlistenable, the unreadable – a treasure trove of avant garde artists and their art. And more besides. As a long-time fan of the avant garde and outsider art, I am constantly shocked at how little of the archive I know.

There are the obvious points of reference – Yoko Ono, Dali, Foucault, Kinski, and Cage. Then there are the less obvious – the almost-contemporary provocateur Stewart Home‘s films and music, Ergo Phizmiz, Delia Derbyshire, Hoffman and Rubin, and Guy Debord. Then there are those that are just plain… well, obscure. Like…

If you are feeling overwhelmed I recommend the strategy of finding your birthday within one or both of the 365 Days projects and listening to what you find.

Kenneth Goldsmith is the founder of UbuWeb and MoMA‘s first Poet Laureate, amongst other things, and this interview in The Awl is a must-read. Expounding on patchwriting (“post editing” in translation) and plagiarism, poetry, the internet and the new spaces for art, he is absolutely mesmerising. In keeping with the theme of the piece, and because you should be reading the whole thing yourself, I’ll only reproduce the juiciest segments.

On his latest book Seven American Deaths and Disasters, a transcription of radio and news reports of national disasters and the peeling back of the media’s façade:

These DJs woke up thinking they were going to the station for a regular day and then they were in the position of having to narrate, say, 9-11 or the Kennedy assassination, to the world. They were completely unprepared and in their speech, you can hear this. It’s stunning. The slick curtain of media is torn, revealing acrobatic linguistic improvisations. There was a sense of things spinning out of control: facts blurred with speculation as the broadcasters attempted to furiously weave convincing narratives from shards of half-truths. Usually confident DJs were now riding by the seat of their pants, splaying raw emotion across the airwaves: smooth speech turned to stutter, laced with doubt and fear. Unhinged from their media personalities, these DJs became ordinary citizens, more like guys in a bar than representatives of purported rationality and truth. Opinions—some of them terribly misinformed—inflected and infected their supposedly objective reportage. Racism and xenophobia were rampant— somehow the DJs couldn’t help themselves.

His earlier books, as the interviewer summarizes them:

(interviewer) Your 2000 book Fidget transcribes every single movement your body made during thirteen hours. In your 2003 book, Day, you chronologically re-typed every single word from every page of a copy of The New York Times. Your later trilogy, Weather, Traffic and Sports, transcribe random radio reports. Now with Seven American Deaths and Disasters you’re transcribing reports of specific events.

On teaching students to copy and steal – plagiarize – to use it as a creative tool:

The students that take my class know how to write. I can hone their skills further but instead I choose to challenge them to think in new and different ways. Many of them know how to plagiarize but they always do it on the sly, hoping not to get caught. In my class, they must plagiarize or they will be penalized. They are not allowed to be original or creative. So it becomes a very different game, one in which they’re forced to defend choices that they are making about what they’re plagiarizing and why. And when you start to dig down, you’ll find that those choices are as original and as unique as when they express themselves in more traditional types of writing, but they’ve never been trained to think about it in this way.

You see, we are faced with a situation in which the managing of information has become more important than creating new and original information. Take Boing Boing, for instance. They’re one of the most powerful blogs on the web, but they don’t create anything, rather they filter the morass of information and pull up the best stuff. The fact of Boing Boing linking to something far outweighs the thing that they’re linking to. The new creativity is pointing, not making. Likewise, in the future, the best writers will be the best information managers.

On words and writing and the change that they have gone through with new technologies:

This is a great challenge to traditional notions of writing. In the digital age, language (aka code) has become materialized, taking on a whole new dimension (although one that had been proposed throughout various avant-garde movements during the twentieth-century: futurisms, concrete poetry, and language poetry, and so forth—which is why the 20th c. avant-garde is more relevant than ever).

Words are no longer just for telling stories. Now language is digital and physical. It can be poured into any conceivable container: text typed into a Microsoft Word document can be parsed into a database, visually morphed in Photoshop, animated in Flash, pumped into online text-mangling engines, spammed to thousands of email addresses and imported into a sound editing program and spit out as music; the possibilities are endless.

On boredom and inspiration:

John Cage said, “If something is boring after two minutes, try it for four. If still boring, then eight. Then sixteen. Then thirty-two. Eventually one discovers that it is not boring at all.” So what is boring? I find narrative boring. I find truth boring. I once wrote an essay called Being Boring where I claim to be the most boring writer who has ever lived. I can’t even read my own books—I keep falling asleep. But they’re great to talk about and think about. So I think we need to redefine our relationship to boring. Reality TV is boring with all the boring parts taken out of it. Instead, go watch An American Family from the early 70s, at this weird moment where mainstream TV fell under the spell of Andy Warhol. You’ll never be bored in the same way again.

I don’t think that journalists can be boring because to do so would be to shed too much truth on what they do. They’re mostly writing boring stuff, they’re bored, their editors are bored, and their readers are also bored, but nobody will admit it. Again, it’s here that Warhol is prescient. When asked if he reads reviews of his works, he replied that he doesn’t – he only adds up the column inches.

His radio show on WFMU:

(interviewer) I did radio with you at WFMU in the mid-00s. Your radio show, which ran from 1995-2010, seemed to push the format as far as possible. By 2010 you were broadcasting three hours of silence, which you would break every thirty minutes with a station ID. The station staff was often angry with you and the listeners always complained it was the most unlistenable radio imaginable. 

On poetry and writing as a living in an age of advanc(ed/ing) technology – and what “being a writer” means:

…the emerging poet Steven Zultanski just put out what I feel to be perhaps the most important book of his generation called Agony. In the old days, this one book alone would’ve made his career. Now it’s just another in a sea of Lulu publications and Facebook likes.

….

Literary works—and careers—might function the same way that memes do today on the web, spreading like wildfire for a short period, often unsigned and un-authored, only to be supplanted by the next ripple. While the author won’t die, we might begin to view authorship in a more conceptual way: perhaps the best authors of the future will be ones who can write the best programs with which to manipulate, parse and distribute language-based practices. Even if, as Christian Bök claims, poetry in the future will be written by machines for other machines to read, there will be, for the foreseeable future, someone behind the curtain inventing those drones; so that even if literature is reducible to mere code—an intriguing idea—the smartest minds behind them will be considered our greatest authors.

Read through to the end for the easter egg, the master stroke…

Warhol claimed that, “Art is what you can get away with,” something I am inspired by. Artists ask questions, and they don’t give answers. Artists make messes and leave it for others to clean up. I’ve left a long trail of appropriated texts, dishonest statements, and brutal pranks. I’ve stolen things that weren’t mine and have made a career out of forgery and dishonesty. I’m proudly fraudulent. And it’s served me well—I highly recommend it as an artistic strategy.

MetaFilter

From its text-based beginnings as Bulletin Board Systems (BBSs) and USENET, the internet has been used as a place to distribute the weird and wonderful.

Before Digg and Reddit existed, similar offerings were available from MetaFilter (MeFi) and SomethingAwful. I long ago signed up for Digg and Reddit, but for some reason I never really got the hang of MeFi – until recently.

I joined a week or so ago, and I’m pretty impressed so far. Here are a few examples of stuff that I found just yesterday:

MeFi’s Learn Korean Easy (Oh, the grammar!) reposts artist/adventurer Ryan Estrada’s great comic Learn to read Korean in 15 minutes, which is fascinating. An internet hole opens up as I go searching for more information on Hangul, its origin and its promulgator Sejong the Great. I know what I’ll be doing on my next interminable wait at an airport, which from the comments seems to be the place most people like to learn the phonetic alphabet system.

The other post of interest, lighter of my lifeboat, firearm of my loincloths, explains a neat artistic morph of text called the N+7 procedure, developed by French poet Jean Lescure. The rules are simple – change every noun to the seventh noun after it in a dictionary.

The N + 7 Machine is a page that implements the procedure for N <= 15 on text that you enter or paste in.
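
If you want to play with the idea yourself, here’s a minimal sketch of the procedure in Python – my own illustration, not the code behind The N + 7 Machine. It assumes you have an alphabetically sorted noun list (the tiny NOUNS list below is a stand-in for a real dictionary); every word found in that list is swapped for the noun seven entries further along, wrapping around at the end.

```python
import re

# Toy noun list standing in for a real dictionary. In practice you would load
# thousands of nouns from an alphabetically sorted word list.
NOUNS = sorted([
    "apple", "arch", "arm", "army", "art", "artist", "ash", "atlas",
    "badge", "bag", "ball", "band", "bank", "bar", "barn", "base",
    "life", "light", "lily", "limb", "lime", "line", "link", "lion",
])

def n_plus_7(text, n=7):
    """Replace every word found in NOUNS with the noun n entries further along."""
    index = {noun: i for i, noun in enumerate(NOUNS)}

    def shift(match):
        word = match.group(0)
        i = index.get(word.lower())
        if i is None:
            return word                      # not a noun we know – leave it alone
        return NOUNS[(i + n) % len(NOUNS)]   # wrap around at the end of the list

    return re.sub(r"[A-Za-z]+", shift, text)

print(n_plus_7("light of my life"))  # -> "apple of my lion" with this toy list
```

The toy list is obviously too small to produce Lescure-grade results, but swap in a real noun dictionary and the procedure is exactly the one described above.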

Go forth and ART!


Mostly about language

I don’t like blogging like this, but it’s hard to find the time with an intermittent internet connection. I find titbits, but I rarely follow links – I’ve not watched an online video in almost a year, and my inbox has an email thread containing 276 emails with over 400 links to “revisit” once I return to the land of faster bandwidth. As though anyone on the internet has time for 400+ old links.

However, as someone interested in language, it behooves me to relay the content I’ve found.

I don’t know why I have a low opinion of Will Self, but I do. As a self-important anarchist, I think I rub up against other self-important *ists. Despite this, I found his latest piece for the BBC, In defence of obscure words, a rollicking good skewering of the stupid, the vapid, the empty. Be it expressing a love of words and language and using them:

I’d point out that my texts were as full of resolutely Anglo-Saxon slang as they were the flowery and the Latinate. I’d observe that English, being a mishmash of several different languages, had a large and exciting vocabulary, and that it seemed a shame not to use it – especially given that it went on growing all the time, spawning argot and specialist terminology as freely as an oyster does its milt.

or the end result of a culture built by the risk averse:

But now that all formerly difficult subject matter is, if not exactly permitted, readily accessible, cultural artificers have no need to aim high. The displacement of aesthetically and intellectually difficult art as the zenith has resulted in all sorts of sad and interrelated phenomena.

In the literary world, books intended for child readers are repackaged and sold to kidult ones, while even notionally highbrow arbiters – such as Booker judges – are obsessed by that nauseous confection “a jolly good read”. That Shakespeare remains our national writer is, frankly, bizarre, given that with his recondite vocabulary, myriad historical references, and convoluted metaphorical language, were he to be seeking publication in the current milieu, his sonnets and plays would undoubtedly also be branded as ‘too difficult’.

As for visual arts, the current Damien Hirst retrospective at Tate Modern is a perfect opportunity to see what becomes of an artificer whose impulse towards difficult subject matter was unsupported by any capacity for hard cogitation or challenging artistry. The early works – the stuffed animals and fly-bedizened carcasses – retain a certain – albeit recherché – shock value, while the subsequent ones degenerate steadily to the condition of knocked-off merchandise, making the barrier between the gift shop and the exhibition space evaporate in a puff of consumerism.

But the most disturbing result of this retreat from the difficult is to be found in arts and humanities education, where the traditional set texts are now chopped up into boneless nuggets of McKnowledge, and students are encouraged to do their research – such as it is – on the web.

I quite enjoyed the brief moment of intellectual challenge that he poses.

Which is why I now turn to a phenomenon that really only exists because of the Internet but grew from the old-style newsprint tropes “Word of the day” and maybe “What in the world” – the longer-form list of obscure, obtuse, unused, hard-to-translate or extinct words, usually in groups of five, eight or ten. I’m not immune to posting links to these lists here on Pineapple Donut, but it’s not often that it’s done anew – as an infographic, and without the pronunciation of the words. And to stick it up to Mr Self, I found it through the most internet of ways – in RSS from a tumblr called this isn’t happiness, via mentalfloss, and then PopSci, to the original artist’s site, 21 Emotions with No English Word Equivalents.

At first I was put off by the filter of emotive words, but I came around as I thought about it – not only was Pei-Ying’s choice considered, in that it provided a focus that’s easy to explain, empathise with and understand, but it gave her the opportunity to explore feelings that don’t have words in English (or presumably any other language) yet are unique and identifiable to the (ahem, current) internet age. Unfortunately the artist’s site was so popular after the various postings that its bandwidth limit has been blown – 509’d, in tech speak.

I didn’t know that the Talkly awards even existed, but the Crikey language blog, FullySic, noted that last year it was given to Ingrid Piller. The award goes to the individual who has done the most to increase public knowledge about language, and she sounds like the person we would most like to be sitting next to on the 6am flight from Nadi to Tarawa.

Cory Doctorow fires up more passion in people than I’d expect – I find him interesting, intelligent and sometimes even enthralling, but the argy-bargy that follows him is hard for me to comprehend. He writes for the Guardian on the difference between value and price in the internet era, largely focusing on positive externalities and their exploitation. Most interesting to me is his use of Google and its approach to translation.

A positive externality arises when you do something you want to do that also makes life better for someone else. For example, if you drive your car slowly and carefully to avoid a wreck, a positive externality is that other users of the road have a safer time of it, too. If you keep up your front garden because it pleases you, your neighbours get the positive externality of slightly buoyed-up property values from living on a nicely kept street.

Positive externalities — virtuous cycles — are all around us. Your kid learns to speak because of all the people around her who carry on conversations and because of the TV shows and radio programmes where speaking occurs (as do immigrants like my grandmother, whose English fluency owes much to daytime TV after she came to Canada from Russia).

Google is a case-study in harvesting positive externalities. It offered a free, voice-based directory assistance number, and used the interactions users had with its software to build a corpus of common phrases, expressed in multiple accents and under a wide range of field conditions. Then it used this to train the voice-recognition software that powers its Android-based phone-search. Likewise, it mined all the publicly available translations on the web – EU documents that appeared in multiple languages, fan-based translations for subtitles on cult cartoons, and everything else it could find – and used this to train its automated translation engine, providing it with the context that it needed to figure out the nuance and sense of ambiguous phrases.

He contends that the defining mania of the internet era is

resentment over positive externalities. Many people and companies have concluded that if someone, somewhere, is getting value from their labour, that they should get a cut of that value… Many people have accused Google of “ripping off” the public by indexing content, or analysing it, or both. Jaron Lanier recently accused Google of misappropriating translators’ labour by using online translated documents as a training set for its machine-translation engine – an extreme version of many labour-oriented critiques of online business.

leading to

the infectious idea of internalising externalities turns its victims into grasping, would-be rentiers. You translate a document because you need it in two languages. I come along and use those translations to teach a computer something about context. You tell me I owe you a slice of all the revenue my software generates. That’s just crazy. It’s like saying that someone who figures out how to recycle the rubbish you set out at the kerb should give you a piece of their earnings. Harvesting positive externalities involves collecting billions of minute shreds of residual value – snippets of discarded string –and balling them up into something big and useful.

While I enjoy his take, either he or Lanier has missed the mark. If Lanier’s critique were purely about the Google Translation Toolkit it would be understandable, but as is pointed out in the comments, the EU has made its translations available for exactly that purpose. Similarly, all the Free and Open Source software translation files have been sitting in the public domain waiting to be harvested since the movement started in the early 1990s – it was just a matter of someone thinking to harvest them, and having the hardware and technical expertise to do so. And indeed, those files remain open source – anyone else is welcome to harvest the same files; Google hasn’t locked them up. The Translation Toolkit, on the other hand – asking translators for their Translation Memories (TMs) and storing them – that is taking other people’s work. I guess the question then becomes: can Google guarantee that they haven’t used those TMs in their translation service?

Finally, for the real language nerds, Matt Might’s The language of languages is a healthy, if slight, refresher on context-free grammars:

Languages form the terrain of computing.

Programming languages, protocol specifications, query languages, file formats, pattern languages, memory layouts, formal languages, config files, mark-up languages, formatting languages and meta-languages shape the way we compute.

So, what shapes languages?

Grammars do.

Grammars are the language of languages.

Behind every language, there is a grammar that determines its structure.

This article explains grammars and common notations for grammars, such as Backus-Naur Form (BNF), Extended Backus-Naur Form (EBNF) and regular extensions to BNF.

To my mind the discussion of context-sensitive grammars and parsing is poorly explained and in need of expansion, and the article in general could be made more interesting to the non-computer-scientist with a little more work. A primer only, really.
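
To make the grammar-as-structure idea concrete, here’s a small sketch – my own example, not from Might’s article – of a BNF-style grammar for arithmetic expressions, with each production turned into one function of a hand-rolled recursive-descent parser. The grammar, the tokenizer and all the function names are illustrative choices.

```python
# A tiny grammar for arithmetic expressions, written out BNF-style:
#
#   <expr>   ::= <term>   { ("+" | "-") <term> }
#   <term>   ::= <factor> { ("*" | "/") <factor> }
#   <factor> ::= NUMBER | "(" <expr> ")"
#
# Each nonterminal becomes one function, so the shape of the grammar
# is mirrored directly in the shape of the code.
import re

def parse(text):
    tokens = re.findall(r"\d+|[-+*/()]", text)   # numbers, operators, parentheses
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        tok = peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        pos += 1
        return tok

    def expr():                                  # <expr> ::= <term> { ("+"|"-") <term> }
        value = term()
        while peek() in ("+", "-"):
            op = eat()
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term():                                  # <term> ::= <factor> { ("*"|"/") <factor> }
        value = factor()
        while peek() in ("*", "/"):
            op = eat()
            rhs = factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor():                                # <factor> ::= NUMBER | "(" <expr> ")"
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        return int(eat())

    result = expr()
    if peek() is not None:
        raise SyntaxError(f"unexpected trailing token {peek()!r}")
    return result

print(parse("2 * (3 + 4) - 5"))  # -> 9
```

The point the article gestures at becomes tangible here: the grammar isn’t just documentation, it is the skeleton that the recognising program hangs off.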

One step at a time: Google plays the longest game around

Slashdot has informed me of a HuffPo piece by Found in Translation co-author Nataly Kelly about Google hiring Ray Kurzweil, potentially the world’s most eccentric dork.

The beauty of the web shines through when a commenter can sum it up and extrapolate better than the original post:

You need to investigate the entire initiative Google is spearheading around its acquisition of Metaweb. They are building an ontology for human knowledge, and are ultimately building the semantic networks necessary for creating an inference system capable of human-level contextual communication. The old story about the sad state of computers’ contextual capacity recounts the computer that translates the phrase “The spirit is willing, but the flesh is weak.” from English to Russian and back, and what they got was “The wine is good but the meat is rotten.”

The new system won’t have this problem, because it will instantly know about the reference coming from the Bible. It will also know all the literary links to the phrase, the importance of its use in critical historical conversations, the work of the saints, the despair of martyrs – in short, an entire universe of context will spill out about the phrase, and as it takes the conversational lead provided by the enquirer it will dance to deliver the most concise and cogent responses possible. In the same way, it will be able to apprehend the relationship between a core communication given in context ‘A’ and translate that conversation to context ‘B’ in a meaningful way.

Ray is a genius for boiling complex problems down into tractable solution sets. Combine Ray’s genius with the semantic toy shop that Google has assembled, and the informational framework for an autonomous intellect will emerge. The real question is how you make something like that self-aware. There’s another famous story about Helen Keller: before she had language and symbolic reference, she lived like an animal, literally a bundle of emotions and instincts. One moment, one utterly earth-shattering moment, there was nothing; then Annie Sullivan, her teacher, placed her hand in a stream of cold water and signed water in her palm. Helen understood… water. In the next moment Helen was born as a distinct and conscious being; she learned that she had a name, that she was. I don’t know what that moment will look like for machines, I just know it’s coming sooner than we think. I also can’t be certain whether it will be humanity’s greatest achievement or our worst mistake. That remains to be seen.

The Endangered Languages Project

I’m sure I’ve posted about this mob before, or a very similar project – unfortunately I’m not in a position to search through my posts to find where I might have, so in the meantime: the Endangered Languages Project.

Google has had a role in developing this project and has a press release up now:

The Endangered Languages Project, backed by a new coalition, the Alliance for Linguistic Diversity, gives those interested in preserving languages a place to store and access research, share advice and build collaborations. People can share their knowledge and research directly through the site and help keep the content up-to-date. A diverse group of collaborators have already begun to contribute content ranging from 18th-century manuscripts to modern teaching tools like video and audio language samples and knowledge-sharing articles. Members of the Advisory Committee have also provided guidance, helping shape the site and ensure that it addresses the interests and needs of language communities.

Google has played a role in the development and launch of this project, but the long-term goal is for true experts in the field of language preservation to take the lead. As such, in a few months we’ll officially be handing over the reins to the First Peoples’ Cultural Council (FPCC) and The Institute for Language Information and Technology (The LINGUIST List) at Eastern Michigan University. FPCC will take on the role of Advisory Committee Chair, leading outreach and strategy for the project. The LINGUIST List will become the Technical Lead. Both organizations will work in coordination with the Advisory Committee.

As part of this project, research about the world’s most threatened languages is being shared by the Catalogue of Endangered Languages (ELCat), led by teams at the University of Hawai’i at Manoa and Eastern Michigan University, with funding provided by the National Science Foundation. Work on ELCat has only just begun, and we’re sharing it through our site so that feedback from language communities and scholars can be incorporated to update our knowledge about the world’s most at-risk languages.

Building upon other efforts to preserve and promote culture online, Google.org has seeded this project’s development. We invite interested organizations to join the effort. By bridging independent efforts from around the world we hope to make an important advancement in confronting language endangerment. This project’s future will be decided by those inspired to join this collaborative effort for language preservation. We hope you’ll join us.