I began reading In the Land of Invented Languages today after hearing about it on Lambda the Ultimate. Currently, I am reading about John Wilkins's failed attempt (one of many, apparently) to build a philosophical language. Like several readers at LtU, my mind turned to its application to programming. Like the noble readers of that blog, I feel that the kinship between constructed languages (from Elvish and Klingon to Esperanto) and programming languages is a strong one. The irony is that the latter have gained far more traction than the former. Many constructed languages, like Wilkins's, are built around the idea that ambiguity should be removed from language. In programming, this is not a matter of taste: ambiguity must be, and eventually is, removed. In complex languages like C++ (which, I assert, is complex in entirely the wrong way, but that is a post for another time), it may be unclear from the spec how a feature should be implemented, but the implementors ultimately make some decision. So we get dialects: Visual C++, GNU C++, Borland C++, and so on, ad nauseam. In human language, however, ambiguity is not merely neutral; it is actually a positive. Literature and poetry revel in the ambiguity of language, in puns and rhymes and all those stupid idiosyncrasies. John Wilkins would probably have made one heck of a programmer.
Arika Okrent, the author of In the Land of Invented Languages, points out that Wilkins's language was a great linguistic study and completely unusable as a spoken tongue. She is right. But a language that is unfit for human speech is not necessarily worthless. As evidence, look at the myriad computer languages available. These are all useful (well, almost all), but you would never catch me speaking to a person in C#, Java, PHP, Lisp, or what have you. A philosophical language is exactly the kind of thing computers love: free of ambiguity, with adding a new concept as simple as placing a stub in a massive dictionary. Understanding comes almost for free. A great deal of effort has gone into trying to get machines to understand human language. At the current stage of development, this is a lost cause. Hopefully it will not always be, but right now our combination of machine and algorithm cannot untangle the ambiguities of human speech. The example one of my computer science professors used was how a machine would figure out the meaning of the phrase "fruit flies like a banana". Is it that flies, fruit flies in particular, enjoy bananas? Or that fruits fly through the air as a banana would?
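That two-way reading can be made concrete. Below is a minimal sketch in Python, using a toy grammar of my own invention (not anything from the book or from any real NLP system): a brute-force parser enumerates every way the sentence can be derived, and it finds exactly two, one per meaning.

```python
# Toy context-free grammar, hand-built so that "fruit flies like a banana"
# is ambiguous. Keys are nonterminals; anything else is a literal word.
GRAMMAR = {
    "S":   [("NP", "VP")],
    "NP":  [("fruit", "flies"), ("fruit",), ("Det", "N")],
    "Det": [("a",)],
    "N":   [("banana",)],
    "VP":  [("V", "NP"), ("flies", "PP")],   # "like" as verb vs. "flies" as verb
    "V":   [("like",)],
    "PP":  [("like", "NP")],
}

def parses(sym, toks, i):
    """Yield (tree, next_index) for every way `sym` can match toks starting at i."""
    if sym not in GRAMMAR:                       # terminal: must match the next token
        if i < len(toks) and toks[i] == sym:
            yield sym, i + 1
        return
    for rhs in GRAMMAR[sym]:
        # Expand the production left to right, keeping every partial match.
        partials = [((), i)]
        for part in rhs:
            partials = [(kids + (tree,), k)
                        for kids, j in partials
                        for tree, k in parses(part, toks, j)]
        for kids, j in partials:
            yield (sym, *kids), j

toks = "fruit flies like a banana".split()
trees = [t for t, end in parses("S", toks, 0) if end == len(toks)]
print(len(trees))   # prints 2: one tree per reading
for t in trees:
    print(t)
```

The first tree groups "fruit flies" as the subject ("these flies enjoy a banana"); the second makes "flies" the verb ("fruit travels the way a banana does"). A human picks one instantly; the machine can only report that both are grammatical.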
The philosophical programming language might be the next step. True, it might be a little harder than picking up BASIC or PHP, but it would be a great deal more expressive. I know, this also sounds like it is approaching the heresy of building a DSL and expecting random business personnel to do their own programming. That is not really what I have in mind. The programming would still be done by programmers, though it would be more dictation than description. As I looked at the excerpts from Wilkins's tables, I was strangely reminded of Prolog predicate evaluation. It would be easy to represent his whole vocabulary as a sequence of facts at the opening of a Prolog program. With an unambiguous grammar, the whole thing could be parsed, understood, and executed.
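To make the fact-base idea concrete, here is a small sketch, written in Python rather than Prolog so it stays self-contained. The lexicon entries are loosely modeled on the oft-quoted "de / deb / deba" (element / fire / flame) chain attributed to Wilkins's tables; treat the specific entries, and the `gloss` helper, as my own illustration, not the book's data or any real implementation.

```python
# A hypothetical sliver of a philosophical-language lexicon, stored the way
# a Prolog program might open with facts: category(Word, Meaning).
FACTS = {
    "de":   ("genus", "element"),
    "deb":  ("difference", "fire"),     # illustrative entries, not Wilkins's actual tables
    "deba": ("species", "flame"),
}

def gloss(word):
    """Unfold a word into its taxonomic path by checking each prefix, shortest first.

    In such a language the spelling *is* the classification, so 'looking a word
    up' is just walking its prefixes through the fact base.
    """
    path = []
    for i in range(2, len(word) + 1):
        entry = FACTS.get(word[:i])
        if entry:
            path.append(entry)
    return path

print(gloss("deba"))
# [('genus', 'element'), ('difference', 'fire'), ('species', 'flame')]
```

The point is that no dictionary-style lookup of arbitrary symbols is needed: the meaning of every word is recoverable mechanically from its spelling, which is exactly the property a parser-and-executor would want.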
To the best of my knowledge, this has never been tried. I would love to see a first shot at it, wrinkles and all. Give me a shout if you know of or are working on something like this.