• Welcome to Jekyll!

    You’ll find this post in your _posts directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run jekyll serve, which launches a web server and auto-regenerates your site when a file is updated.
  • Announcing Blutwurst 0.5

This one really isn't new - I cut the release over a month ago. Blutwurst is a test data generation program. It is written in Clojure and offers a ready way to create data for use in a database or in unit tests. I plan to write a longer introduction to the application later. There is a more detailed changelog entry at https://github.com/michaeljmcd/blutwurst/blob/master/CHANGELOG.md, but the big changes were the addition of JSON Schema parsing as another way to define a schema and of XML as an output format. Release: https://github.com/michaeljmcd/blutwurst/releases/tag/v0.5.0
  • Announcing spiralweb 0.3

    I noticed the other day that my Literate Programming system was not Python 3 compatible. As this is generally unacceptable to me, I updated Spiralweb for the first time in a few years. There really aren't any changes besides the port. PyPI entry: https://pypi.python.org/pypi/spiralweb Github: https://github.com/michaeljmcd/spiralweb/releases/tag/v0.3
  • Announcing mm2tiddlywikitext v0.1

    I'm trying to catch up on writing posts about some of the open source stuff I've been building out lately. This is an older one, but still useful (I hope) to someone. mm2tiddlywikitext is a stylesheet that reformats a FreeMind mind map as TiddlyWiki bulleted text. My main use-case for this is importing my mind maps in a searchable way into TiddlyWiki. The easiest option is to use it directly from within FreeMind. Go to File > Export > Using XSLT.... In the dialog that pops up, provide the path to stylesheet.xslt and an output location. This will create a TiddlyWiki 5 JSON file that can be imported into TiddlyWiki. Release link: https://github.com/michaeljmcd/mm2tiddlywikitext/releases/tag/v0.1
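    If you would rather not open FreeMind, the same transform can be run from the command line with any XSLT 1.0 processor. A minimal sketch using xsltproc (the map and output file names here are made up; stylesheet.xslt is the file from the repository):

        xsltproc stylesheet.xslt my-map.mm > my-map-tiddlers.json

    The resulting JSON file can then be imported into TiddlyWiki the usual way, for example by dragging it onto the wiki.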
  • Importing FreeMind Mind Maps into TiddlyWiki 5

    One of the largest improvements I have made to my personal development workflow is keeping a commonplace book of all the things I have been tinkering with, for "fun" and for work. The process has worked best since I started keeping it in a TiddlyWiki, a nice digital format. This may be worth a post at another time, though at least one post has already been made. I still use FreeMind mind maps when doing brainstorming or freewheeling research when things are very much in flux. I tend to love mind maps during this phase, but don't find them as attractive a tool for longer term knowledge management. This usually leads me to the point where I want to import my notes into TiddlyWiki. Now, it is entirely possible to simply export an image or HTML page from FreeMind and add the file to TiddlyWiki. It is also possible to attach the raw .mm file. In some cases, this may even make sense. Sometimes, however, it would just make more sense to dump it as an outline in wiki text format. To help with this, I have created an XSLT stylesheet (mostly because I've never done real, dedicated work with XSLT) that can be used fairly readily. It is on github at https://github.com/michaeljmcd/mm2tiddlywikitext under an MIT license. One of these days I might package it into a better standalone utility. Maybe not. We'll see.
  • Creating Integration Tests with JNDI

Technically speaking, an automated test that requires JNDI is not a unit test. As an aside, it is preferable to segregate the portions of the application accessing JNDI so that as much of the application as possible can be unit tested. Nevertheless, if JNDI is used, something must ultimately do it, and it is preferable to be able to test this code prior to its use in production. So far, the best starting point for me has been to use Simple JNDI to create the provider and allow the rest of the code to work unimpeded. The original Simple JNDI project decayed a little bit; an update with bug fixes is available on github under a different group id (https://github.com/h-thurow/Simple-JNDI). I also used H2's in-memory database so that I could put real connection information in the test case. To get started, I added these dependencies to my pom.xml:

        <dependency>
          <groupId>com.github.h-thurow</groupId>
          <artifactId>simple-jndi</artifactId>
          <version>0.12.0</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>com.h2database</groupId>
          <artifactId>h2</artifactId>
          <version>1.4.192</version>
          <scope>test</scope>
        </dependency>

    It took some trial and error to get the configuration right, which is one of the motivations for this writeup. The first thing you need is to add a jndi.properties file to your test resources. The settings I will discuss below were chosen to emulate the Tomcat server setup that we use here.

        java.naming.factory.initial = org.osjava.sj.SimpleContextFactory
        org.osjava.sj.root=target/test-classes/config/
        org.osjava.sj.space=java:/comp
        #org.osjava.sj.jndi.shared=true
        org.osjava.sj.delimiter=/

    Notice that the root is given relative to the pom. This is one thing that caused me a great deal of grief on the first pass. Another important element was the use of the space option, which was needed to emulate Tomcat's environment. Within the test resources, I added a config folder containing a single file, env.properties, with contents like the following:

        org.example.mydatasource/type=javax.sql.DataSource
        org.example.mydatasource/url=jdbc:h2:mem:
        org.example.mydatasource/driver=org.h2.Driver
        org.example.mydatasource/user=
        org.example.mydatasource/password=

    The data source in question did indeed have a dotted name, which only added to some of the initial confusion. This allowed my code-under-test to work as anticipated. For reference's sake, this is what the Java code looked like:

        private DataSource retrieveDataSource() {
            try {
                Context initContext = new InitialContext();
                Context envContext = (Context) initContext.lookup("java:/comp/env");
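    The listing above breaks off mid-method, so here is a minimal sketch of how the rest of the lookup, plus a test exercising it, might look. It assumes JUnit 4 on the test classpath and reuses the names from the jndi.properties and env.properties files shown above; it is an illustration, not the original code.

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;
        import java.sql.Connection;

        import org.junit.Test;
        import static org.junit.Assert.assertNotNull;

        public class DataSourceLookupTest {

            // Completes the fragment above: resolve the environment context, then
            // the dotted data source name defined in env.properties.
            private DataSource retrieveDataSource() throws NamingException {
                Context initContext = new InitialContext();
                Context envContext = (Context) initContext.lookup("java:/comp/env");
                return (DataSource) envContext.lookup("org.example.mydatasource");
            }

            @Test
            public void dataSourceIsResolvableAndUsable() throws Exception {
                DataSource dataSource = retrieveDataSource();
                assertNotNull(dataSource);

                // The jdbc:h2:mem: URL from env.properties means this connection
                // hits only an in-memory database.
                try (Connection connection = dataSource.getConnection()) {
                    assertNotNull(connection.getMetaData());
                }
            }
        }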
  • Some Additional Thoughts on Large ebook Conversions

I absolutely love exploring books and acquiring new reading material. The quest for more reading material has often led me all over the public domain-loving internet looking for obscure texts. Gutenberg, the Internet Archive, CCEL and Sacred Texts are among my favorite haunts. I often find myself attempting to convert texts for display on my nook SimpleTouch (this older piece of tech is probably worth its own post at some point). Calibre is, of course, a natural tool of choice, but I have found something odd: when dealing with larger texts, especially those of a more technical nature (as opposed to general fiction), Calibre has very limited options for taking the book from plain text to a formatted version. Most of the options it does present are based heavily on Markdown. This design choice is a reasonable one, but often breaks down for texts that are not sufficiently close to Markdown. One of my recent conversions is an excellent example of this. I have been looking for good concordances of the Bible for my ereader to help with Bible study and general writing when all I have is a paper notebook and my Nook. It turns out that the options for concordances in either the Barnes and Noble or Amazon stores are relatively limited. So, I turned to CCEL and was attempting to convert "Nave's Topical Bible." When attempting to convert from plain text, one of the biggest difficulties is structure detection. If you look at the Calibre documentation on structure detection (https://manual.calibre-ebook.com/conversion.html#structure-detection), one of the more obvious things is that chapter detection occurs after a book has been converted to HTML. There are effectively no options to control structure detection in the conversion from plain text to HTML. What I wound up doing was falling back on the old txt2html tool, which has some more complete options than those in Calibre. I ended up using commands like the following to convert to HTML manually:

        $ txt2html -pi -pe 0 ttt.txt -H '^[A-Z]$' -H '^\s\s\s[A-Z][A-Za-z- ]+$' > ntt.html

    This approach isn't all gravy. It requires some manual tinkering to find good regexes for each individual book. Moreover, different books require different regexes. Here is another example from a book I converted:

        $ txt2html -pi -pe 0 ntb.txt -H '^[A-Z]$' -H '\s\s\s[A-Z-]+$' > ntb.html

    In some cases, I even had to add a level of headers for use in the books.
  • Reading the Classics: JavaScript - the Good Parts

I read Douglas Crockford's JavaScript: The Good Parts over the long weekend. My interests were partially pedagogical. A huge part of my current job involves mentoring more junior developers and helping them to learn, and one of my interests was whether Crockford's book would be a good resource to pass along. It seems like a silly question. It is still the book best known for putting JavaScript on the map as a quasi-respectable language. I learned JavaScript the way I have learned most languages, which is to say, in the school of hard knocks. I had learned most of the good and bad parts of JavaScript through trial, error, Google and JSLint. The book wasn't a bad refresher, but there weren't a lot of new insights, either. The timing makes the question genuinely difficult. ECMAScript 6 is out, some of its features are available now, more are available through polyfills, and most of the rest can be acquired through Babel (https://babeljs.io/). The book is, in a sense, almost obsolete. Almost. A good example of what I mean is prototypical inheritance. Crockford spends a lot of time explaining how class-based inheritance differs from prototypical inheritance. This is still relevant: JavaScript has remained a prototypical language. He also shows several techniques to make the meaning of JavaScript programs clearer to those used to classical object-oriented programming. This part is less relevant. Today, I would reach for the ES6 class syntactic sugar (via Babel for the browser) or CoffeeScript rather than layering some manual code over the prototype. Similarly, Crockford discusses variable scoping at length. Again, this is partially relevant. The base scopes haven't changed and they still easily trip up many programmers (I have used questions about JavaScript scopes to trip people up in interviews). These things need to be understood. However, the let statement in ES6 does provide some clearer semantics, and its use should be encouraged as ES6 becomes widely available. There are also new parts which, good or bad, require some level of explanation. Generators and promises will require explanation and, of course, categorization as either good or bad parts. This sort of thing leaves me in a place where I want to recommend Crockford, but I feel the need to add some general commentary. Hopefully, a second edition of JavaScript: The Good Parts will come out in the next few years and make these little oddities of time go away.
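    To make the "almost obsolete" point concrete, here is a small illustration of my own (not an excerpt from the book): the hand-rolled prototype style the book teaches next to the ES6 class sugar that boils down to the very same mechanism.

        // Pre-ES6: prototypical inheritance wired up by hand, roughly in the
        // style the book walks through.
        function Point(x, y) {
          this.x = x;
          this.y = y;
        }
        Point.prototype.distanceFromOrigin = function () {
          return Math.sqrt(this.x * this.x + this.y * this.y);
        };

        // ES6: 'class' is syntactic sugar over the same prototype machinery;
        // the semantics Crockford explains have not changed underneath.
        class Point2 {
          constructor(x, y) {
            this.x = x;
            this.y = y;
          }
          distanceFromOrigin() {
            return Math.sqrt(this.x * this.x + this.y * this.y);
          }
        }

        console.log(new Point(3, 4).distanceFromOrigin());  // 5
        console.log(new Point2(3, 4).distanceFromOrigin()); // 5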
  • Weighing in on JavaScript Package Managers

I have quite recently begun work on an open source project with a node back-end and front-end work planned to be done in React. This is my first full effort to work with the latest and greatest in JavaScript tooling. We use Ext JS and Sencha Cmd at work and, whatever else you want to say about that stack, it is different. My last full-blown front-end development was before the real Node boom and I pretty much did it the old-fashioned way -- namely, downloading minified JavaScript by hand and referencing it in my markup (shaddup ya whippersnappers). JavaScript saw a real explosion in package managers a few years ago, which was the natural next step for a growing ecosystem that had none. Market forces naturally took over and many of the earlier examples have been culled out of existence. There are really two main options at this point: NPM and Bower. Bower has enjoyed a healthy following, but (by my entirely unscientific survey) it appears as though the NPM uber alles faction within the JavaScript world is growing stronger. The sentiment is echoed in other places, but http://blog.npmjs.org/post/101775448305/npm-and-front-end-packaging gives a good overview of the fundamental syllogism. It basically goes that package management is hard, NPM is large and established, so you should use NPM everywhere rather than splitting package managers. The argument has a certain intrinsic appeal - after all, the fewer package managers, the better, right? The real problem is that it is possible to use NPM as a front-end package manager, but it is deeply unpleasant. Systems like Browserify and Webpack are needed to prepare dependencies for usage on the front-end. These are complex and, to a degree, brittle (I ran into https://github.com/Dogfalo/materialize/issues/1422 while attempting to use Materialize with an NPM application). Even if one assumes that every package can ultimately be Browserified (and it doesn't seem like an overly-optimistic assumption), the effort seems to be pure waste. Why would I spend time writing complex descriptors for modules on top of their existing packages? For all its shortcomings, Bower seems more robust. I spent a few hours fiddling with Browserify and Materialize without much success (although I think I do see how Browserify would work now), but mere minutes wiring up Bower. This does not get into the fact that Browserify/Webpack require additional information to extract CSS, images and web fonts. Even when things are working, it would require constant effort to keep it all up to date. At the moment, NPM, even NPM 3, simply does not have good answers for setting up front-end development. The NPM proponents really, in my opinion, need to focus on making front-end modules more effective rather than pushing tools that are little more than hacks, like Browserify and Webpack. At this point, I am just going to rock out with Bower. Maybe someday I will be able to trim out Bower -- but I would rather spend time coding my application than giving NPM some TLC.
  • Converting Large Text Files to epub with Calibre

I spent some time debugging a long-standing issue I have had using Calibre to convert large text documents to epubs for viewing on my nook. The normal course of events was that I would feed a large (multi-megabyte--the example I was debugging with was 5.5 MB) text document into Calibre and attempt to convert it to an epub with the defaults. After a lot of churning, Calibre would throw a deep, deep stack trace with the following message at the bottom:

        calibre.ebooks.oeb.transforms.split.SplitError: Could not find reasonable point at which to split: eastons3.html Sub-tree size: 2428 KB

    I have long been aware that large HTML documents have to be chunked for epub conversion, although I do not claim to know whether this is mandated in the spec, or allowed and needed as a technical requirement for individual readers. In either event, Adobe Editions devices, like the nook, require chunks of 260 KB. The error is clear in this light: for some reason, Calibre was unable to create small enough chunks to avoid issues. My working assumption had been that Calibre would chunk the files at the required size. So, every 260 KB, give or take a bit to find the start of a tag, would become a new file. The default, however, is to split on page breaks. Page break detection is configurable, but defaults to header-1 and header-2 tags in HTML. When your document is in plain text, as opposed to Markdown or some such, few, if any, such headers will be generated. This can cause Calibre to regard the entire document as a single page, which it cannot determine how to split into smaller files. Converting a large, plain-text document to Markdown or HTML by hand is a task that is much too manual for someone who simply wants to read an existing document. My approach was much more straightforward: I changed the heuristic used to insert page breaks. On the Structure Detection tab (when using the GUI), there is an option entitled "Insert page breaks before (XPath Expression):". I replaced the default (which was the XPath for H1 and H2 tags) with the following:

        //p[position() mod 20 = 0]

    This will insert a page break every 20 paragraphs. The number was utterly arbitrary. Because paragraphs are usually well-detected, this worked fine. My large 5.5 MB file, a copy of Easton's Bible Dictionary from CCEL, converted as expected.
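    The same trick appears to be available from the command line as well. If I recall the switch correctly, ebook-convert exposes the page-break XPath as --page-breaks-before (double-check ebook-convert --help on your install), so something along these lines should match the GUI setting above (file names are illustrative):

        ebook-convert eastons3.txt eastons3.epub --page-breaks-before "//p[position() mod 20 = 0]"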
  • Features & Identity

Recently, I was reading the Wall Street Journal's article about Facebook working to incorporate a Twitter-style hashtag in its platform (Source: http://online.wsj.com/article/SB10001424127887323393304578360651345373308.html). The article has comparatively little to say. Like most mainstream treatments of technology, it is mostly a fluff piece, but one thing caught my eye. The writer and, most likely, Facebook itself have lost sight of vision while staring at features. Twitter's hashtag concept works because Twitter is built as a broadcast system. What I say to anyone, I say to the world. So, cross-referencing user posts by tag gives me an idea as to what everyone on Twitter has to say about a specific topic. Facebook is not, by design, a broadcast system. It really does aim to be more of a social network. When I use Facebook, the focus is on the set of people that I know. The cross-referencing idea has very limited usefulness in the echo chambers of our own friends, family and acquaintances. For better or worse, we probably already know what they think. Both Twitter and Facebook need to concentrate on vision, especially the latter, which seems to have the larger share of feature envy. The focus is not on hashtags. It is on whether I want to communicate with a circle of friends or broadcast to the whole world. In all honesty, there is room for both, provided that they can find a way to monetize the affair. This has actually been the sticking point for all social networks so far. They get big, they get popular, and they do so with venture capital. Then they collapse when their growth can no longer be maintained. Therein lies the sticking point: coming up with a social networking concept that accomplishes the members' goals in a sustainable way (and, yes, that means making money).
  • SpiralWeb v0.2 Released

SpiralWeb version 0.2 has just been released. I felt the urge to scratch a few more itches while using it for another project. As with version 0.1, it can be installed from PyPI using pip. The changelog follows:

        == v0.2 / 2012-10-08

        * bugfix: Exceptions when directory not found
        * bugfix: PLY leaks information
        * bugfix: Create version flag
        * bugfix: Top level exceptions not handled properly
        * bugfix: Exceptions when chunk not found
        * bugfix: Pip package does not install cleanly
        * Change CLI syntax
        * Cleanup default help

    Also of note is the fact that the source code has been moved over to github: https://github.com/michaeljmcd/spiralweb. Now, off to bed. I have to get to the gym in the morning.
  • Announcing SpiralWeb version 0.1

Version 0.1 of SpiralWeb is available for download at http://pypi.python.org/pypi/spiralweb/0.1. To install, make sure that you have Python and pip, then run pip install spiralweb to download and install. The project home page can be found at https://gitorious.org/spiralweb.

    About SpiralWeb

    SpiralWeb is a Literate Programming system written in Python. Its primary aims are to facilitate the usage of literate programming in real-world applications by using lightweight text-based markup systems (rather than TeX or LaTeX) and by integrating painlessly into build scripts. It is language agnostic. The default typesetter is Pandoc's extended markdown, but new backends can be readily added for any other system desired. For more information on literate programming, please see literateprogramming.com.

    Usage

    The syntax is minimal:

        @doc (Name)? ([option=value,option2=value2...])?
            Denotes a document chunk. At the moment, the only option that is used is the out parameter, which specifies a path (either absolute or relative to the literate file it appears in) for the woven output to be written to.

        @code (Name)? ([option=value,option2=value2...])?
            Denotes the beginning of a code chunk. At present, the following options are used: out, which specifies a path (either absolute or relative to the literate file it appears in) for the tangled output to be written to; lang, which specifies the language that the code is written in. This attribute is not used except in the weaver, which uses it when emitting markup, so that the code can be highlighted properly.

        @<Name>
            Within a code chunk, this indicates that another chunk will be inserted at this point in the final source code. It is important to note that SpiralWeb is indentation-sensitive, so any preceding spaces or tabs before the reference will be used as the indentation for every line of the chunk's output--even if there is also indentation in the chunk.

        @@
            At any spot in a literate file, this directive results in a simple @ symbol.
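    To make that concrete, here is a tiny, made-up literate web (the file names and chunk names are mine, not from the release). It weaves to hello.md and tangles to hello.py, exercising the @doc and @code directives and their out/lang options:

        @doc Hello Example [out=hello.md]
        This trivial web produces a single Python script that prints a greeting.
        The chunk below is tangled to hello.py; because lang=python is given, the
        weaver can highlight it properly in the woven document.

        @code Greeter [out=hello.py,lang=python]
        def main():
            print("hello from a literate program")

        if __name__ == "__main__":
            main()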
  • Putting it all Together: Reverting a Single Commit in Git

First, we locate the hash of the commit we want to undo with git log. Then we dump it out as a patch:

        git format-patch -1 05bc54585b5e6bea5e87ec59420a7eb3de5c7f10 --stdout > changes.patch

    (Note that the -1 switch limits the number of patches; by default, git format-patch will pull many more.) Once we know that we have the patches that we wish to roll back, we run this command:

        git apply --reverse --reject changes.patch

    Finally, we commit the reversed changes. The big thing is the -1 switch on git format-patch. Many of the articles I found were pulling a large number of patches, and I did not need that.
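    Worth noting: git also ships a one-step built-in for this exact case. If you do not need the intermediate patch file, something like the following should land you in the same place (same example hash as above):

        # Create a new commit that undoes exactly the one named commit.
        git revert 05bc54585b5e6bea5e87ec59420a7eb3de5c7f10

        # Or stage the reversal without committing, to review it first.
        git revert --no-commit 05bc54585b5e6bea5e87ec59420a7eb3de5c7f10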
  • In Search of C# Omnicomplete for Vim

By day, I write in C#, mostly on a stock .NET install (version 4, as of this writing; I expect that the principles laid out here will transfer forward, as the Vim ecosystem is fairly stable). I often find myself switching back and forth between Visual Studio 2010 (with VsVim) and gvim 7.3. Frankly, I should like to spend more time on the gvim side than I do. While a great deal of time and effort has gone into customizing my vimrc for .NET development, I often find myself switching back to VS in order to get the benefits of Intellisense when working with parts of the very hefty .NET Framework that I do not recall from memory. Every so often, I do some fishing around for something useful to make my Vim Omnicomplete more productive. In this post, I will lay out my newest attempt and analyze the findings. As such, this post may or may not be a tutorial on what you should do. In any event, it will be a science experiment in the plainest sense of the word. First, the hypothesis. While checking out the Vim documentation on Omnicomplete, we see that the Omnicomplete function for C makes heavy use of an additional tag file, generated from the system headers [http://vimdoc.sourceforge.net/htmldoc/insert.html#ft-c-omni], and that this file is used in conjunction with what Omnicomplete understands about the C programming language to make a good guess as to what the programmer likely intends. It should be possible, then, with minimum fuss, to generate a similar tag file for C#. It may also be necessary to tweak the completion function parameters. We will look at that after we have checked the results of the tag file generation. It turns out that Microsoft releases the .NET 4 Framework's source code under a reference-only license [http://referencesource.microsoft.com/netframework.aspx]. The initial vector of attack will be to download the reference code and build a tag file from it (this seems well in keeping with the intent behind the license--if this is not so, I will gladly give up the exercise). The link with the relevant source is the first one (Product Name ".NET" and version "8.0" as of this writing). The source was placed under RefSrc in Documents. After running:

        ctags -R -f dotnet4tags *

    in the RefSrc\Source\.Net\4.0\DEVDIV_TFS\Dev10\Releases\RTMRel directory, we got our first pass at a tag file. A little googling prompted the change to this [http://arun.wordpress.com/2009/04/10/c-and-vim/]:

        ctags -R -f dotnet4tags --exclude="bin" --extra=+fq --fields=+ianmzS --c#-kinds=cimnp *

    Then, as the documentation says, we added the tag file to our list of tag files to search:

        set tags+=~/Documents/RefSrc/Source/.Net/4.0/DEVDIV_TFS/Dev10/Releases/RTMRel/dotnet4tags

    When this is used in conjunction with tag file completion (C-X C-]), the results are superior to any previous attempts, particularly in conjunction with the Taglist plugin [http://www.thegeekstuff.com/2009/04/ctags-taglist-vi-vim-editor-as-sourece-code-browser/]. With this alone, we do not get any real contextual searching. For example, if we type something like:

        File f = File.O

    and then initiate the matching, we get practically any method that begins with an O, regardless of whether or not said method is a member of the File class. If we stop here, we still have a leg up over what we had before. We can navigate to .NET framework methods and fetch their signatures through the Taglist browser--but we would still like to do better.

    The only reason the resulting tag file is not included here is that it is fairly large--not huge, but much too large to be a simple attachment to this post.
  • A Quick Note on Building noweb on Cygwin

My laptop has bitten the dust. Until I have the chance to open it up and see if the damage is fixable, I have been borrowing my wife's computer to tinker (to her annoyance, I'm sure, but she used my laptop until we replaced the desktop, so all's fair). I was going to install noweb on cygwin, and hit the following error on build:

        In file included from notangle.nw:28:
        getline.h:4: error: conflicting types for 'getline'
        /usr/include/sys/stdio.h:37: error: previous declaration of 'getline' was here

    As I had built noweb before, this error struck me as a little strange. It turns out that, in stdio.h, Cygwin includes its own definition of getline, unlike standard Unix-likes. A quick googling turned up that this was not unique to noweb; other packages had encountered similar difficulties. The answer that worked for me is here: http://ftp.tug.org/mail/archives/pdftex/2006-February/006370.html In short, all one has to do is open /usr/include/sys/stdio.h and comment out the line that reads:

        ssize_t _EXFUN(getline, (char **, size_t *, FILE *));

    For safety's sake, I reinstated the line after installing noweb and everything seems to be running fine.
  • Literature Review: PEGs

Parsing Expression Grammars, or PEGs, are a syntax-oriented formalism for defining parsers, meant to ease the task of building rich programming languages. I had had the opportunity to tinker with PEGs sparingly and, finally, I got around to reading the original paper (available here: http://pdos.csail.mit.edu/~baford/packrat/popl04/). My reading notes from the paper can be downloaded here: http://www.mad-computer-scientist.com/blog/wp-content/uploads/2011/06/peg.html I am fully aware that this is not, as it were, a new paper. It came up originally in my searches for a good parsing library in Common Lisp. For the project that it was intended for, I ultimately moved on to using OMeta. While OMeta is a fine system, it actually did not win on power grounds because, quite simply, I do not need the extra expressiveness for what I am working on. It won out because the implementation was better than the PEG library I had tried. As it is kind of old territory, my review has little to say. In reality, when I first ran across PEGs I felt strangely out of the loop, but here goes anyway: PEGs are a powerful mechanism for defining parsing grammars. The form of the language itself is similar to standard EBNF in its general layout, but a PEG describes recognition directly, so the grammar is, in effect, the parser. It avoids the ambiguities inherent to Context Free Grammars by using prioritized selection of paths through the grammar. As a result, it is arguably more powerful than traditional CFGs while being simpler to use. While PEGs seem to have caught on a lot better than their predecessors (discussed in the paper), they seem to receive less notice than OMeta, which further builds on PEGs.
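    To make the prioritized-choice point concrete, here is a toy grammar of my own (not from the paper or the notes), written roughly in the notation of Ford's paper. The / operator tries its alternatives strictly in order and commits to the first that matches, which is exactly what removes the ambiguity a CFG would allow:

        # Arithmetic expressions; !. asserts end of input.
        Expr    <- Sum !.
        Sum     <- Product (('+' / '-') Product)*
        Product <- Value (('*' / '/') Value)*
        Value   <- Number / '(' Sum ')'
        Number  <- [0-9]+

        # Prioritized choice also settles the classic dangling-else problem by fiat:
        # the first alternative wins whenever it matches. (Cond and Stmt elided.)
        If      <- 'if' Cond 'then' Stmt 'else' Stmt
                 / 'if' Cond 'then' Stmt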
  • How WPF gets GUI programming right

WPF is another in a long line of Microsoft UI-related technologies, each promising more than the one before. WPF is basically Silverlight for the desktop (or, if you prefer, Silverlight is WPF for the web). We have been building an application in WPF as of late at my place of employment, and I thought I'd post what I think WPF does right. The biggest thing is that WPF builds UIs declaratively. I cannot stress enough how important I think this really is. The biggest pain about using Java's Swing framework was writing long sequences of code that initialized controls in a form's constructor. Under the hood, Windows Forms works pretty much the same way. The biggest difference is that Microsoft ships a nice designer with Visual Studio, so the raw kludginess of the approach is hidden from most programmers, since they look at everything through the lens of the designer. The declarativeness goes beyond simply declaring that widgets exist, to their layout (via the Grid mechanisms--really, these should be used by default and the designer left on the shelf) and their data flow. The latter is particularly interesting. ASP.NET has data binding, but the version employed by WPF is far more sophisticated. When I jumped back to an ASP.NET project, I immediately found myself missing the power of WPF databinding, but to add it to a web framework would unquestionably require a continuation-based framework like the one employed by Weblocks or Seaside. The importance here is that both the interface and how it interacts with data can be declared. Many GUI designers and markup languages have come along that allowed one to declare the layout, but few, if any, mainstream GUI designers have allowed so much expressiveness. The hard part about all this is that C# is a statically typed language and, as a result, a lot of these features are based heavily on reflection, which is a performance hit, since the JIT compiler cannot really optimize these things. Perhaps it was just my imagination, but I feel pretty sure that WPF applications lag behind their Windows Forms cousins in terms of speed. All in all, though, WPF is a fine framework.
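    As a minimal sketch of what declaring the data flow looks like (my own toy example, not code from the application mentioned above): a text box and a text block stay in sync with a Name property on whatever object is assigned as the DataContext, with no event-handler code at all.

        <!-- Assumes the Window's DataContext is set to an object exposing a Name
             property that raises INotifyPropertyChanged; class names are made up. -->
        <Window x:Class="Demo.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Binding sketch">
          <Grid>
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto" />
              <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>
            <TextBox   Grid.Row="0" Text="{Binding Name, UpdateSourceTrigger=PropertyChanged}" />
            <TextBlock Grid.Row="1" Text="{Binding Name}" />
          </Grid>
        </Window>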
  • Polymorphism, Multiple Inheritance, & Interfaces...Pick 2.

The title for this post comes from a statement that was brought up by a coworker as having been said to him. The overall point of this post will be simple: given that choice, your answer should be obvious: you want polymorphism and multiple inheritance, because there is nothing that you can do with interfaces that you cannot do with multiple inheritance. Interfaces provide two things, depending on their use: a form of multiple inheritance in languages that do not otherwise support it, and design-by-contract capabilities. Clearly, in the former case, you are better off with multiple inheritance, as you receive the full power of the feature. In the latter case, it is trivial to create an almost-empty class that acts as an interface, if that is the effect you are after. The main objection raised was the counterexample: what if you have a class Animal and another class Plant? Surely you do not want a programmer to inherit from both? That would not make sense. To which I would answer: why not? If it makes sense to whoever wrote it, why prevent it? They might, after all, be creating something for the little shop of horrors. Largely, I think the notion that interfaces are somehow superior to multiple inheritance comes from never having used multiple inheritance in a system built from the ground up to support it (like CLOS in Common Lisp), as multiple inheritance strictly supersedes interfaces.
  • The Literature

Looking back at my last few posts, something occurred to me: a lot of the more exotic focus of this blog has been lost. While I enjoy examining MVVM and QuickBooks, one of the whole points of this blog was to offer a fusion of useful code-monkey concepts and computer science (hence, the domain name of this site). Lately, there has not been much "scientist" at the mad computer scientist. One of my new series of posts is going to be literature reviews. I have a massive reading list of computer science papers queued up, as well as some other materials. In these posts, I will read a journal article or watch a lecture and post my notes and thoughts about it. The first one will be coming soon, so look out for it.
  • More on Microblogging and Programming

I had been rolling around some thoughts on microblogging and programming since my last blog post. First of all, I found it interesting that Twitter started life as an internal project before getting VC funding. This reinforces, to me, the value of what I was saying, which is that microblogging for more limited audiences and topics is more useful than the present day and age where we have people microblogging about brushing their teeth. I have also been interested in doing more work on Sheepshead. According to gitorious, my last commit was over a month ago. Such are the results of having a family, a job, and a life--but I really want to get back to working on it. As I start gearing it all up again, I have decided to try a little experiment. Instead of simply waiting on someone else to try out microblogging for a small development team, I am going to try to bootstrap a small team while microblogging. As I develop Sheepshead and push it forward, I am going to try and use microblogging to mull over design decisions and announce progress. The service I have decided to use for this endeavor is Identi.ca (you can see the stream here), rather than the more ubiquitous Twitter. I did this for a few reasons, chief among them being that I expect there to be more engineering types, as well as more open source-minded individuals, on Identi.ca. Another important consideration is that Identi.ca allows its users to export data. My intention is to keep backups of the information on the feed, so that if something were to happen to Identi.ca and the project attained a meaningful size, a StatusNet instance could be set up, even if only as a stopgap. We will see how this all goes (or if it does--I can definitely see how Sheepshead is sort of a niche development). In the meantime, I am going to try and get some code written.
  • Linq to Sql is not fit for GUI Applications

The title is a little incendiary, I admit, but I think it is a good place to start. We are building a database-driven application with WPF (using MVVM) & Linq to SQL and, in the process, a few caveats about Linq to SQL have come out in a truly fine way. The issues all revolve around that little innocuous thing known as a DataContext. For those of you who may not be familiar with the idea, in Linq to SQL a DataContext is "the source of all entities mapped over a database connection. It tracks changes that you made to all retrieved entities and maintains an 'identity cache' that guarantees that entities retrieved more than one time are represented by using the same object instance." Further down the reference page for the DataContext we read that "In general, a DataContext instance is designed to last for one 'unit of work' however your application defines that term. A DataContext is lightweight and is not expensive to create. A typical LINQ to SQL application creates DataContext instances at method scope or as a member of short-lived classes that represent a logical set of related database operations." So the most logical place to create and dispose of our DataContexts is in the methods that implement the business logic. This works perfectly well for retrieving data, and for updates on entities that have no relationships, but fails with a "Cannot attach an entity that already exists" exception when an update is made to entity relationships. The problem is that Linq to SQL cannot move objects between DataContexts, so if one context was used to look up the object in question and another was used to look up an object used in a relation (say, to a lookup table), then Linq throws the fit seen here. In a web application, it is much easier to keep this from ever happening, as a single DataContext will likely be used to do the work from a BL call (or, at least, the calls will be sufficiently separate as not to tread on each other's feet). If the context is moved up to the business object layer (i.e. as a static member), the problem is partially alleviated and partially aggravated. It is somewhat alleviated in that all of the objects of a certain type will, at least, have been pulled from a central DataContext and so will have no issues amongst themselves. However, there is still the issue of when an object is set (via databinding) from a list that was pulled by another DataContext. An easy, and genuine, example is where one entity (call it A) has an attribute named "type", which must be one of the entries in a lookup table (which we will call entity B). If a drop-down list is databound to the entries in the lookup table as pulled through entity B's context (the most logical choice), the same error message as above is hit--unless, of course, all of the entities are re-pulled by entity A's DataContext before saving. A labor-intensive, inefficient, and maintenance-heavy process. At any rate, the application could be written this way, but not without a great deal of effort to re-pull and re-merge data with a single context. Finally, one could move the context up to the application layer--the entire application shares a single DataContext. The problem with this is that, in an application where multiple tabs or windows can be open, if any single object attempts to save its changes via SubmitChanges, the pending changes for all windows will get submitted, even if the user comes back and hits "Cancel". The result in this scenario is utter and complete chaos.
Ultimately, what we did in this scenario was to create a single DataContext per ViewModel (where we experienced issues with this, not universally) and pass it through all of the data fetching operations. The bookkeeping was certainly a little tedious to write, but it worked. From a conceptual standpoint, this is very dirty as it makes the presentation layer aware, even in a limited sense, of what is being done by the data access layer. While Linq to Sql is very nice, it has some very bad shortcomings when used in GUI applications.
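    In rough outline, the workaround looked something like the sketch below. The names are invented (MyAppDataContext stands in for the designer-generated context, Order and OrderType for the mapped entities); the point is only the shape: one context per ViewModel, handed to every fetch, so that every entity the screen touches, lookup rows included, is tracked by the same instance.

        // Sketch only: assumes a LINQ to SQL generated context named MyAppDataContext
        // with Orders and OrderTypes tables; not the actual application code.
        using System.Data.Linq;
        using System.Linq;

        public class OrderEditViewModel
        {
            // One DataContext per ViewModel: every entity this screen touches is
            // tracked by the same instance, so cross-context attach errors go away.
            private readonly MyAppDataContext _context = new MyAppDataContext();

            public IQueryable<OrderType> OrderTypes
            {
                // The lookup table is pulled through the same context that will
                // later save the Order, which is the whole point of the exercise.
                get { return _context.OrderTypes; }
            }

            public Order Load(int orderId)
            {
                return _context.Orders.Single(o => o.OrderId == orderId);
            }

            public void Save()
            {
                // Only the changes made through this ViewModel's context are flushed.
                _context.SubmitChanges();
            }
        }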
  • One too many Tiers

Something has been nagging me lately about the three tier architecture--quite simply, it has too many tiers. If you subscribe to the full three tier architecture, you have an application that, at the end of the day, stacks a presentation layer on top of a business logic (BL) layer, a business object (BO) layer and a data access layer. Yet, if you are using that architecture, you are almost certainly using it with an object oriented programming language--and if both things are true, there is a problem. Its nature may not be immediately obvious, but it is there nonetheless: this flavor of the n-tier architecture defeats the entire point of object oriented programming. To review, one of the upsides of object orientation is that data and the operations performed on it are encapsulated into a single structure. When so-called business rules (operations, really) are split into ancillary classes (the BL classes), encapsulation is broken. In effect, we are using object oriented techniques to implement procedural programming with dumb C-style structs. The true value in the multitiered architecture is actually far simpler than this birthday-cake methodology that has been faithfully copied into so many projects: keep presentation and logic separate. Any good methodology gets this much right (like MVC). In conclusion, the remedy is simple: if you have or are building an application with a multitiered architecture, make your code base cleaner and more intuitive by merging the BO and BL layers.
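    A small, made-up illustration of what that merge means in code: the "dumb struct plus logic class" split on one hand, and a single object that owns its own rule on the other.

        // The birthday-cake version: a dumb business object plus a separate
        // business-logic class that operates on it.
        public class Invoice
        {
            public decimal Subtotal { get; set; }
            public decimal DiscountRate { get; set; }
        }

        public class InvoiceLogic
        {
            public decimal CalculateTotal(Invoice invoice)
            {
                return invoice.Subtotal * (1 - invoice.DiscountRate);
            }
        }

        // The merged version: the rule lives with the data it governs, which is
        // what object orientation was asking for all along.
        public class RichInvoice
        {
            public decimal Subtotal { get; set; }
            public decimal DiscountRate { get; set; }

            public decimal Total
            {
                get { return Subtotal * (1 - DiscountRate); }
            }
        }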
  • A Short Introduction to MVVM

Our team is building an application using WPF with the Model-View-ViewModel design pattern, and I wanted to take a few minutes to give an introduction to MVVM. The pattern itself is comparable to the venerable MVC pattern, though by no means identical. Let's begin by examining each piece and then looking at how they fit together.

        Model -- the model is very much the same thing as the model in MVC or the business objects in a three tiered architecture. It is a straight-up model of the data being manipulated, without any display logic of any variety.

        View -- the view is, again, very much the same as the view in MVC. It is the formatting or display.

        ViewModel -- if you are familiar with MVC or similar patterns, the ViewModel is the largest departure. There are two ways to look at a ViewModel, which will become clearer after reading through some code:

        1. The ViewModel as an adapter between the model and the view. This is, perhaps, the most familiar and comforting way to view it, though it is also the least accurate, as the logic behind a view is also encapsulated in the ViewModel.

        2. The ViewModel as an encapsulation of the logic and state of the view, independent of any display logic. In short, a ViewModel models a view.

    Of the two, the second explanation is the better one, though I did find #1 helpful when first examining the pattern. MVVM is a fairly new pattern, seeing most (or all?) of its use in some of the newer Microsoft technologies, WPF and Silverlight. As a result, the fit between framework and pattern is often subpar. The easiest example (which does not seem to arise in Silverlight) is that of popping up a dialog in a WPF application. If the ViewModel knows how to pop up a dialog, then we are clearly violating the pattern, as the ViewModel is supposed to model a view's operations and state and leave such details to the view. After all, the whole idea here is that we should be able to bolt multiple views onto a single Model-ViewModel pair, especially (and here is where the aims differ a little from MVC, if not in theory, at least in practice) views that cross paradigms. For example, a WPF view and a Silverlight view, allowing the application to exist as both a desktop application and a web-based application. If you do not do something, though, you are unable to perform an elementary task: prompt the user (after some fashion or another) for input. In practice, we are using a mediator to allow the ViewModel to send messages which the View can then receive and act on as its implementation mandates. On one hand, this works well and I like how it falls out in practice. The View and the ViewModel remain separate, and mockups or tests could be written that simply interact with the mediator. From a more theoretical standpoint, it makes me uneasy, because it is plastering over a severe weakness in the pattern that, perhaps, ought to be addressed at the pattern level instead of at the implementation level. Moreover, what is a mediator, really? It is very much like an ad-hoc event handling system. Would it not be better to simply use events as they were meant to be used? Another thing I noted was causing some people angst was that the MSDN description of MVVM (see the section entitled "Relaying Command Logic") says that the codebehind for a xaml file should be empty. While I certainly think the idea of the View itself not doing anything, as it were, is a good one, there is sometimes logic that is View-specific and should, therefore, be kept in the view.

    A better formulation, in my humble opinion, is that only tasks specific to the view itself should live in the codebehind. For example, if you are writing the basic set of CRUD operations for some object, the act of saving the object will not be view specific. Taking care of some rendering details might be. The optimum case is, of course, that all logic find its way into the ViewModel. Until WPF and MVVM are a better fit, there will still be oddball cases that mandate violating the principle. To wrap up, the most important thing about MVVM is that the ViewModel acts as a model for a view rather than a traffic controller (like the Controller in MVC) so that, in theory, one could bolt entirely different UIs on top of one Model-ViewModel set. In practical terms, MVVM is in its infancy and, consequently, there are still some rough edges that developers should be aware of when writing code.
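    For concreteness, here is a stripped-down sketch of the mediator idea (the types and message names are invented for illustration; this is not our application's code): the ViewModel publishes a request for confirmation and whatever View is attached decides how to present it.

        using System;
        using System.Collections.Generic;

        // A deliberately tiny mediator: ViewModels publish messages, Views subscribe.
        public static class Mediator
        {
            private static readonly Dictionary<string, List<Action<object>>> subscribers =
                new Dictionary<string, List<Action<object>>>();

            public static void Subscribe(string message, Action<object> handler)
            {
                if (!subscribers.ContainsKey(message))
                    subscribers[message] = new List<Action<object>>();
                subscribers[message].Add(handler);
            }

            public static void Publish(string message, object payload)
            {
                if (!subscribers.ContainsKey(message)) return;
                foreach (var handler in subscribers[message])
                    handler(payload);
            }
        }

        public class CustomerViewModel
        {
            public void Delete()
            {
                // The ViewModel only states *what* it needs -- a confirmation --
                // and never touches MessageBox or any other WPF type.
                Mediator.Publish("ConfirmDelete", "Really delete this customer?");
            }
        }

        // In the WPF code-behind (or a Silverlight equivalent), the View decides *how*:
        //     Mediator.Subscribe("ConfirmDelete", prompt =>
        //         MessageBox.Show((string)prompt, "Confirm", MessageBoxButton.YesNo));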
  • No, I do not want to reboot...

What is it with Windows and this urgent, burning desire to reboot? Here is how the last couple of weeks on my work PC have gone, as an example. I boot up my computer (which is hard, because every so often, for no discernible reason, the PC hangs on boot) and log in. Windows chipperly informs me that it updated everything. Yay! Butterflies and daisies and happiness. I start up my usual army of suspects. Visual Studio. Firefox. Bug tracker. Et cetera, et cetera. Then AVG Professional pops up a message. All perky, it tells me that it has finished updating. I need to reboot, how about now? Grrrrrr. I'm just getting down to work and you want to reboot? Heck no. But I can't say no. I can postpone it. For 60 minutes. Fine. 1 hour. Just get the heck out of my face. So, every hour or so I tell AVG it better flipping not reboot my computer. Then the shiny, dolphin blue box pops up. Windows has, like, just finished installing the most totally awesome bunch of updates. How 'bout rebooting now? NO. GO AWAY. I DO NOT WANT TO REBOOT. Well, all right then. We can postpone for four hours. FINE. POSTPONE IT FOR 4 HOURS. I JUST WANT TO GET SOME WORK DONE. Then, as it turns out, Flash and Java want to update too.
  • Image Based Approaches for reading PDFs on the Nook

I've been trying to get my nook to provide a more pleasant experience reading PDFs. In all honesty, I saw PDF capabilities as one of the nook's biggest selling points. I had hoped to take all of the academic papers I was interested in and throw them on the nook, saving on time and paper. I was disappointed to find that the reflowing, which is fine for single column, non-technical material, was a huge pain for a large quantity of the papers I wanted to read. Recently, I discovered that the nook will not reflow a PDF if the text size is set to "small." While this is not at all obvious, it is easy enough once you are aware of it. One problem remains: there is no ability to zoom or pan on the nook. So, for a multicolumn paper with generous margins (fairly typical in academic literature), the text becomes either truly unreadable or straining to the eye. As of firmware update 1.5, this has not been addressed. The ironic thing is that these missing features are simply huge. I am sure from reading the nook forums that there are a great many others who are or were excited about the nook because of its ability to read PDFs (something that the Kindle DX 3 is supposed to have, alongside panning and zooming). Moreover, since they are using Adobe Editions on an Android platform, the feature would not have been hard to add. Finally, from a UI perspective, I think all that we really want is for panning to work on PDFs the way it works in the web browser, plus an extra zooming feature. Fortunately, there are some tools to get around these shortcomings, at least in the short term. These all revolve around chopping the PDF up a little bit, and most also work by rasterizing the PDF so that the nook's reflowing has no effect.

        briss is an application to crop PDFs. While one could also do this with ImageMagick, briss first analyzes the PDF, clustering the pages into a couple/few layouts, then allowing the user to set the cropping boundaries manually. It is important to note that, of the three listed here, briss is the only one that does not actually rasterize the PDF.

        papercrop is an application that analyzes a PDF, dividing each page into one or more "crops", which it then puts in order and outputs. Because of the analysis it does on the documents, it is particularly well suited to multicolumn PDFs. It was originally built with academic use in mind, so it works especially well for PDFs that fit that mold: computer generated documents of low to medium complexity. Unfortunately, it does not do so well with scanned documents, such as those that come from the Internet Archive, because it treats the speckling from the scans or from dirt on the page as being legitimate parts of the document, reflowing its crops around them.

        pdfread was one of the first applications developed for the purpose of making PDFs readable on dedicated ebook readers. It rasterizes the content, then breaks it down into image chunks that fit well onto the ereader's screen. It does not support the nook as such, but the Sony Reader PRS-500 profile works perfectly on the nook, since the two devices have the same screen resolution.

    In my experience, for whatever that is worth, briss is the best option for single column materials with wide margins. Simply cut the whole PDF down and the nook display is just fine. I use papercrop for anything in which layout design is important. I do not use pdfread that often, to be honest with you, but it is still handy to have around for the odd ball document.

    In the final analysis, this toolkit has made a large number of documents readable on my nook, including such fine titles as the Unix Haters Handbook and Paul Graham's tome, On Lisp, but in a perfect world (one with panning and zooming on the nook) it would seldom, if ever, be necessary.
  • Symbols vs. Keywords in Common Lisp

I was resuming work on my Sheepshead game today (more will be coming in time on this), and it occurred to me: what is the difference between a symbol and a keyword? If you type

        (symbolp 'foo)
        (symbolp :foo)

    both return T, but, if you type

        (eq 'foo 'foo)
        (eq :foo :foo)

    both return T, yet

        (eq :foo 'foo)

    returns NIL. Finally, if you type

        (symbol-name 'foo)
        (symbol-name :foo)

    both return "FOO". So, what gives? Both are symbols, and symbols with the same print name, at that. The difference is that keywords are all generated in the KEYWORD package, while symbols identified with the QUOTE operator are generated in the current package. So,

        * (symbol-package 'foo)
        #<PACKAGE "COMMON-LISP-USER">

    but

        * (symbol-package :foo)
        #<PACKAGE "KEYWORD">

    Just a quick little tidbit.
  • TFS Frustrations

The other day, I encountered a strange error while trying to unshelve some work in TFS. For those of you who may not be familiar with it, TFS is one of the more server-oriented source control systems around. To keep work that you want, but that is not ready for checkin, you shelve it--which amounts to a semi-private checkin that is not on the main branch. Users can see and share one another's shelves. In this case, I had started some major changes that would not be working in time for a demo, but some other changes were needed for the demo. The new changes included adding a variety of files, besides many modifications. So, I shelved it. A few weeks later, I tried to unshelve it. I assumed that I would just have to merge the changes together (not a big deal, in this case). Instead, TFS complained that all of the new files still existed in the folder. The first TFS frustration: by default, when files are deleted from the project and from the source repository, TFS leaves them on disk. This would be sensible enough, if it at least handled the situation correctly in cases like this. So, I decided to move all of the files to another location (instead of deleting them--time has made me paranoid about things like this). I reran the command. TFS still insisted that the files existed. Frustration #2: the files simply did not exist, yet TFS insisted that they did. The long and short of it is that I had to take the files that TFS left on disk and manually re-add and merge them into the project, as TFS simply would not allow the work to be unshelved. It turns out that there is a known bug in TFS shelves that occurs when files are added, then shelved, and then unshelved again. As a bug, this is so severe that I don't know how TFS ever got released in this state, especially since using shelving in this manner is precisely the sort of thing that Microsoft recommends.
  • ZenCart--how NOT to do Upgrades

I am currently upgrading ZenCart. Why and where are not important. Suffice it to say, the more time I've spent with ZenCart, the more I realize that, open source or not, the project manages to do everything wrong. It all started when I looked at the upgrade instructions. We were upgrading from version 1.3.8 to 1.3.9h. The essence of the instructions is to put a copy of your current install (with template modifications and all) in one directory, an unmodified version of your original install in another, and a fresh install of the new version in a third. Then, you do a diff of the installed version against the unmodified copy of the same version and manually copy your changes into the new directory. Finally, you run the automated database upgrade. That is way too much work, especially when you consider the fact that those instructions are what you do for minor upgrades. The process should be very simple: back up the current setup, unpack the new files, and run the database upgrade script. A large part of the reason for this is the fact that Zen Cart also does templating wrong. Rather than stashing all of the files somewhere simple (/includes/templates/TEMPLATE, using their organization scheme), they are scattered across the entire install in the form of little overrides. Keeping track of the changes made to an install is unpleasant to begin with (source control helps, but it does not make it at all clear which of the overly many .php files are original and which are modifications). When you also add the horrific security bugs that existed in the 1.3.8 line, you get an ecommerce system that I would definitely advise against using.
  • Vim-like extensions for Visual Studio 2010

Now that I have written about configuring Vim to make it interact better with Visual Studio, I want to take a moment and look at some extensions that seek to put Vim in Visual Studio. The first, and probably the oldest, is ViEmu. It was the first thing I came across in my quest to use Vim's fluid editing for .NET development. At $99, it isn't exactly dirt cheap, but I would happily have bought a license to have Vim in Visual Studio. As an added bonus, the $99 version also integrates into SQL Server Management Studio. So, I downloaded the trial beta for VS 2010 and installed it. The addition made Visual Studio so unstable that the results were astounding. Multiple crashes, out of the blue, with no rhyme or reason--except that when I removed ViEmu, it all stopped. Apparently, I'm not the only one. If this were an open source project, I would have been sorely tempted to dig in and see what the problem was, but it isn't. I simply won't buy software that makes my development life miserable. Later, I stumbled across VsVim, an open source project with very similar aims to ViEmu. So far, this has proven to be very, very nice. It detects conflicts between its keybindings and Visual Studio's. The biggest sign of its overall youth (VS 2010 is the only version it supports, or ever has) is that there are many motions that are not fully implemented. For example, if you move the cursor over to an opening parenthesis or brace in command mode and type 'd%' without the quotes, you get an invalid motion error. In Vim proper, it deletes the parentheses and everything between them. Looking at the project activity on GitHub, it looks like there is a good deal of activity, which is always a plus on these kinds of projects. The only real oddity is that the author cannot accept source contributions. I guess if anyone wants to make significant changes, they will have to fork it. Overall, I am very happy with VsVim and am using it day to day. I still use gvim alongside it for those cases where I want full vim happiness (and, especially, when I am in familiar enough territory that I don't need or want IntelliSense).
  • On PowerShell--Or, how Microsoft does not really "get" CLI

    I have been splitting time between PowerShell & bash at work and, so, I have been able to get a little more acquainted with it, which is nice since I have been wanting to ever since my professor (a die-hard bash user) mentioned that Microsoft had just come out with a new shell that was, in some ways, more advanced than bash. There are some niceties in PowerShell and, in truth, it can mostly be summarized as being Bash.NET or, more likely, Bash#. There are, however, some warts. They are the kind of warts that make one thing abundantly clear: PowerShell was designed by theorists, not everyday users. What I mean is this: some of the usability issues (yes, believe it or not, there is such a thing as usability on the CLI) are so glaringly obvious that the only explanation for them is that the designers were theorizing as to what someone who used a shell would want, rather than how they themselves would use one. Let's run through a few examples. Execution of script files is off by default. For anyone who has used any of the Unix shells, this almost incomprehensible. One off scripts, far too long to be typed command by command in a running session, but short enough to be dashed off in minutes, are the order of the day. The idea is that, by default, you cannot actually run PowerShell scripts is just astounding. To get scripts to run, you must either launch powershell.exe with a switch modifying the policy for that particular session (i.e. something like powershell.exe -ExecutionPolicy Unrestricted ) or by using the Set-ExecutionPolicy commandlet. The latter, however, modifies the registry and so requires a reboot. Next, we have the common house-keeping task of setting permissions. Sysadmins do it all the time. In PowerShell, the process to do this mundane task is absolutely daunting. (Note: there is a DOS Command attrib that will fulfill a similar function with much less headache. However, we are trying to judge PowerShell on its own merits, not on the fact that another command happens to be installed on the system.) In order to actually change file permissions, you must first get an ACL object for the file system object in question, then modify it and set the ACL. The example at the link is fairly innocuous looking, but it is far more work than chmod & chown, and only gets worse as you want to do something nontrivial. You cannot zip or unzip directly from the command line. If you run & .\foo.zip you will get the Windows zip wizard to come up, the same as if you had invoked any other file that way, but there is no equivalent to: unzip foo.zip that will just unzip the file, no questions asked. I have had it suggested, that the issue is one of licensing (namely, Microsoft's licensing agreement for zip technologies does not permit them to create a commandlet with this functionality). This is certainly possible, but I, as a user do not really care. Of course, this being PowerShell, you could also write (as some already have) a commandlet that uses an external DLL like sharpzip to handle it. That is all well and good, but it would still mean that I have to manually copy PowerShell commandlets and DLLs to customers' systems--something that is not usually possible. Perhaps the most touted feature of PowerShell is the ability to dynamically load assemblies (DLLs) through reflection and expose the objects to the shell, making it especially useful in a standard BL-BO-DAL architecture when you have some setup tasks, to load your assembly and perform deployment tasks. On paper anyway. 
Unless--yes!--unless something goes wrong (we all knew it would go wrong, otherwise it would not have found its way into this post). Like the fact that the latest version of PowerShell, v. 2.0, cannot load .NET 4.0 DLLs. I guess that isn't really fair. According to Microsoft, it can't. If you jimmy a couple of registry settings and pray that nothing bad befalls you, it might work. Ultimately, flaws like this are a natural offshoot of Microsoft's traditionally anti-CLI culture. Since Microsoft almost single-handedly drives the philosophy of its ecosystem, the result is that few true Microsofties (as opposed to people who just happen to use Windows) understand the CLI. So, when customers demand it (for systems administration, a good shell installed by default is simply essential--we've been stuck with cmd for too long), there is no one who truly understands what they are supposed to be building. Hopefully, Microsoft has been made aware of shortcomings like this and we can expect to see PowerShell refined into a truly pleasant shell. That has, after all, been Microsoft's forte for years: improving software into what it should have been all along.
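For reference, the ACL dance described above looks roughly like this--a minimal sketch, with the path and account name as placeholders:

    # fetch the current ACL, add a rule, and write the ACL back
    $acl = Get-Acl C:\temp\example.txt
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("EXAMPLE\someuser", "Read", "Allow")
    $acl.SetAccessRule($rule)
    Set-Acl C:\temp\example.txt $acl

Four lines to grant one user read access to one file, versus a single chmod or chown invocation on the Unix side.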
  • Microblogging & Programming

    Microblogging, especially through Twitter, but also through its cousin, the Facebook status, has become the thing of late. I have little doubt that, like most things that are "the thing", its popularity will fade into the landfill of fads. In one sense, I have never truly "gotten" microblogging. To be sure, I understand the idea of short messages (140 characters, if you are a Twitterer)--and I have always found them to be a sign of a declining societal intellect. Once, our forefathers in the 18th century conducted flamewars in large, thick volumes (if anyone doubts me, read up on Alexander Pope and the rivalries that spawned the delightful Dunciad). Now, we discuss grave matters in only 140 characters. But lately, I have been wondering if a development team might not be the ideal place to put microblogging to good use. Most teams have neither the time nor the inclination to write and maintain copious notes on design and implementation, but they do have a running dialog. Shared knowledge keeps the discussions short, for the most part, and the decisions and information passed along are so brief that they hardly seem worth the effort of formal documentation. Wikis are a step in the right direction, but they are far too much like the longer documents that no one wishes to maintain. The Twitter model of lots of short little notes might actually be a good fit for the stream of consciousness that pervades every development team. Architecture discussions could be left on a private microblogging platform of sorts. A private setup also allows all of the notes to be made semipublic by default, so we avoid the problem of emails, where things can (in larger organizations than the one I am in) get caught up in a he-said-she-said that could only be cleared up by a sysadmin. The use of Twitter-style @ and # notation would make it easier to cross-reference development notes. This is actually the same advantage it has over IRC, the traditional hacker standby. Since everything is public or semipublic by default, no one has to remember to log the conversation or post the log--or ruffle feathers because a log was kept at all.

The largest irony of these musings is that I know full well that, in one sense, the only purpose I have found for microblogging is in flagrant violation of the model put forth by the site that made it popular, Twitter. On Twitter, everything is public. If, hypothetically, my wife and I were to twitter notes back and forth about family matters (e.g. can you pick up some milk on the way home?), it would be public. There is nothing wrong with that, but it is superfluous fluff to just about everyone else on the world wide web. Incidentally, this is why, if you really want to do something like that, you should set up a private instance of microblogging software. The only value your present whereabouts can have to the general public is as an invitation to be stalked. But, back on topic, I think that is what is wrong with the microblogging model in the first place: the vast majority of what I have to say is of no real interest to the general public. There are exceptions, of course. Some musicians I like use it extensively for tour announcements and to push each others' stuff. It makes perfect sense. There will be a lot of tour announcements that I, as a fan, am interested in and that fit in less than 140 characters. Where are you appearing? When? (For smaller groups, this information changes a lot, and quickly.) Oh, that new album is out? Most of us are not those exceptions, but I think that when you put some constraints on topic matter and audience, there is definite potential.
What I would be the most curious to see, would be an open source project that relies primarily on Twitter or Identi.ca for dev discussions, instead of IRC or email. That would, I think, be the ultimate test of the merit of the idea. Finally, a little googling made me painfully aware that I am not the only one to have such thoughts. I even saw a few academic papers on the subject, though I have not yet had time to read through them. I think a little survey of the literature on this blog might very well be forthcoming...
  • Links & Notes on Using Vim for .NET Development

    My new job is writing in C#/.NET. Overall, I like this quite a bit. C# has some nice features over Java (I went to a Java school, so I do speak from experience) and the .NET framework is quite nice. I have, though, been having to optimize Vim for .NET development and wanted to share some handy links.

http://kevin-berridge.blogspot.com/2008/09/visual-studio-development.html -- a very nice series on setting up Vim with C#.
http://arun.wordpress.com/2009/04/10/c-and-vim/ -- some excellent suggestions here, particularly on making ctags a little more automatic.
http://stackoverflow.com/questions/1747091/how-do-you-use-vims-quickfix-feature -- some questions about Vim's quickfix feature.
http://vimdoc.sourceforge.net/htmldoc/quickfix.html#quickfix-window -- documentation on using Vim's quickfix feature.
http://www.vim.org/scripts/script.php?script_id=356 -- dbext makes life so much easier.

At least the first link there suggests the use of NERDTree. While I have used NERDTree in the past, it is definitely more useful with Visual Studio projects, where the directory hierarchies run deeper than is, I think, typical of other projects. ctags is wonderful for cross-referencing code within the project itself. The suggestions at link #2 were particularly helpful in getting things set up so that updates to the tags file happen automatically and in a timely manner. Some of you may ask: why not use ViEmu? The answer is: I tried. I tried hard and I wanted to like it. It is less setup time and less hassle to have Vim in Visual Studio than it is to build just the right amount of bridging between Vim and Visual Studio. The problem I hit was that ViEmu crashed Visual Studio 2010. Often. Badly. Irritatingly. The stability hit was just too much. Team Foundation Server is another big one. I had to tinker with it a little bit, but link #1 provides some good pointers on getting this set up. Finally, here are three files: dotnetvimrc.nw, _vimrc, _gvimrc. The first file is a literate explanation of the configuration files; the other two are the tangled output. If you want a more thorough explanation of what is going on, consult the first file before moving on. Anyone with comments or questions, feel free to post them here. Some things I am still looking for in my .NET Vim config:

- Better designer integration support. Visual Studio's ASPX designer generates code (the *.designer.cs files) based on events that happen in the IDE--not as part of the build process. This means that making wide-ranging changes outside the IDE causes compilation errors that can only be fixed by opening the .aspx file and making a change or two (I tend to cut the whole file and paste it back into itself), then rebuilding.
- A communication bridge between Vim and the debugger would be nice.
- Similarly, it would be nice to launch the embedded IIS server from within Vim.
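For the quickfix and ctags pieces, the core of my setup amounts to a few lines like the following--a rough sketch only, following the general approach in the links above, and assuming msbuild and Exuberant Ctags are on the PATH:

    " compile with msbuild and capture errors in the quickfix list
    set makeprg=msbuild\ /nologo\ /v:q
    set errorformat=\ %#%f(%l\\\,%c):\ %m
    nnoremap <F5> :make<CR>:copen<CR>
    " regenerate the tags file after every save of a C# file
    autocmd BufWritePost *.cs silent! !ctags -R .

The complete configuration is in the dotnetvimrc.nw file mentioned above.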
  • noweb.vim Source is on Gitorious

    As I promised before, I just posted the literate source to noweb.vim. The code can be had at: http://gitorious.org/nwvim The source, as is fitting, is literate and written in an unusually informal style. Just goes to show that literate programming does not have to be long, in depth material on algorithms.
  • Is the brain a computer?

    I was listening to Donald Knuth give an Author's Talk over at Google and he did it in sort of a Q&A setup. One of the questions was, I think, particularly evocative: "is the brain really a computer and, if so, what are the theological implications?" When I heard that, I put the video on hold to get my thoughts down. Here is the answer, as I would have given it (and I'll listen to Knuth's answer in a second): "If you want to think of the human brain as a computer, I suppose it is as good an analogy as any. If it is, it would be a biological computer unlike any in existence. But I do not see this posing any problems, theologically. One of the most important ideas in Christianity is that a human being has both a body and a soul. It is interesting: C.S. Lewis once said that you do not have a soul. You are a soul. You have a body. This seems particularly pertinent, because if you want to say that the human body is equipped with a massively powerful computer, fine. It is still under the control of the soul, which truly is the essence of the human being. Finally, Faith is interested in the soul. The fact that your soul is operating a computer does not change its overall status and does not pose any difficulties to the Christian faith--it does not even change anything, any more than determining the composition of the rest of the body has." That's pretty close to my knee-jerk reaction. Naturally, I have had a few (five or so) minutes to refine it, so it probably would not have come out so well had I said it aloud, but the content would have been more or less the same.
  • Literate Programming for NAnt

    I just pushed some code up to Gitorious: http://gitorious.org/nant-lp Sticking with my recent interest in literate programming, the project is a simple DLL (nant.lp.dll) that implements notangle and noweave tasks for NAnt. The DLL itself is available from the link (no need to compile!) if you prefer. I will attempt to keep this up to date as I tweak the plugin. A full example can be seen in the .build file at the source. However, to get the general sense across, I'll quote the documentation section of the woven source:

Usage. So far, we have concerned ourselves entirely with building these tasks and have not given any examples of their use. A complete example is the nant-lp.build file used to build this project. However, we will provide a brief overview of both tasks.

Tangling. The notangle task runs the equivalent of this command:

    notangle -RNoweave.cs nant-lp.nw | cpif Noweave.cs

Weaving. Similarly, the noweave task runs the equivalent of this command:

    noweave nant-lp.nw -index -asciidoc > nant-lp.txt

No real surprises here. If anyone has any questions or comments, feel free to post them here.
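To give the flavor of what a build file using these tasks looks like, here is a hypothetical invocation; the attribute names are mine, not necessarily the plugin's, so consult nant-lp.build in the repository for the real ones:

    <!-- hypothetical attribute names; see nant-lp.build for the actual usage -->
    <notangle input="nant-lp.nw" chunk="Noweave.cs" output="Noweave.cs" />
    <noweave input="nant-lp.nw" output="nant-lp.txt" index="true" backend="asciidoc" />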
  • Regarding ORMs

    I cannot say that I have extensive experience with ORMs, but after a recent bout or two with them, I found that there was something about them that nagged at me. Tonight, I realized what that something was. ORMs exist as a middle layer between an object-oriented programming language and an entity-relational database. ORM advocates refer to what they do as solving the issue of "impedance mismatch"[1]. Herein lies the rub. ORM advocates are not solving the problem wrong; they are solving the wrong problem altogether. They have, correctly, noted that an ER database is not an object store. However, they then write the ORM as a large hack so that one can treat an ER database as an object store. They would do far better to write or use an actual object store--something designed to have a 1-1 correspondence between a "record" and an instance of a class. There has been quite a stir lately with the rise of "NoSQL" solutions. These are, I think, little more than the old object-oriented databases coming back from the dead. While most are key-value stores, and therefore not strictly speaking object-oriented, it is not hard to see that most adapt themselves more readily to use as object stores. I think that this is where the future is heading for ORMs. In due time, the developers who today write and use ORMs will move their work over to document-oriented, NoSQL-like databases, as these are faster (inherently, I would argue, since you are not wasting time copying excess data back and forth and all about) and easier still than the ORM layer itself. This leads to the obvious question: will a convergence of the NoSQL camp and ORMs result in the death of relational databases? I do not think so. ER databases are not bad because they are not object stores. They have their own advantages and I think they will be with us far into the future. The model has not survived forty years on account of having a poor foundation. ORMs, however, I think we can do without. Rather than solving an impedance mismatch, they are a crutch for using the wrong tool for the job.

References

[1] http://www.agiledata.org/essays/impedanceMismatch.html
  • The eReader

    For a belated birthday present, my lovely wife decided to get me a nook. For the past couple of weeks I have been geeking out with my new toy. I must say that I love my nook. I have only two real complaints: You cannot download ebooks from the browser. Downloading books through the store works to perfection, but when the onboard web browser is pointed at a supported format (an epub from Gutenberg, for example, or a PDF from the Internet Archive), it chokes, saying that "Downloads are not supported in this release." I can understand Barnes & Noble's reluctance to allow arbitrary downloads on a specialist device, but come on. I can't download ebooks? As an aside, it would not really surprise me if this were a tactical decision to try and get you to buy books (in this case, books that don't exist) from the store. PDF reader oddities. I cannot say how nice it is to have PDF capabilities in the first place. I download a good deal of academic papers and this makes it a lot more convenient to drop them on the reader and read them that way as opposed to soldiering through the read on a PC or printing them off. The reflow works--kind of. The problem is that words will break down to the next line mid word. There is no attempt to hyphenate properly at all. It just breaks the line. Moreover, the next lines are not joined. So, for the most part, you get a typical line oddly broken, followed by a short line. Annoying, but not unusable. Finally, in the PDF department, certain symbols do not seem to render well. I noticed this while reading a paper by Claude Shannon. A pedestrian formula (f(x) = x) came out fx = x. This got more confusing when multivariable functions were used. The touch screen is a tad less sensitive than I would have liked. Quite usable, but this still causes some annoyances. The big thing that this has led me to try is Calibre--an open source ebook library manager, converter, and viewer. It has been pretty nifty. One of the most awesome features is the ability to provide it with an RSS feed and have it create an ebook (ePub in my case, of course). The results are beautiful. For the blogs I read that have longer articles (or more content), I simply grab the URLs from my feed reader and drop them in Calibre. With a new baby up and about, this is wonderful, as I can read while trying to walk the little munchkin to sleep. I really think that we are seeing the beginnings of a revolution with these eInk readers that have been coming out. Unlike some of the more enthusiastic readers who have taken to them, I do not think that they will displace print entirely. They will, however, displace casual printing. Paperbacks will go electronic. If newspapers and magazines survive the internet age, they too will go eink. Enthusiastic readers will always, I think, want their favorites bound, printed, and lovingly nestled on a bookshelf. The ereader is not a fad--but I do think it is transient as a specialist device. The biggest reason to use an ebook reader is the eink display. Mind you, ebooks have existed for years. A dedicated reader is not a prerequisite--but it does noticeably enhance the experience. This is why people want them, as opposed to reading on a smart phone or a tablet. For the cost of the nook, I could have gotten a fancy smart phone (my carrier is offering the droid for $199.99 with a 2-year plan). But the displays are not nearly so nice. We cannot merge them with tablets yet, either. At present, the refresh rate on eink is simply too slow for general purpose computing. 
The lag does not seem bad compared to turning a page. It does seem bad compared to a modern monitor's refresh. Also, I have yet to see a color eink in the wild (though I recall reading that they are coming). This is the merge point. When eink becomes colorized with a sufficient refresh rate, ereaders will merge into the touch tablet market. It only makes sense. Why have two separate devices when one can be manufactured that will do both equally well? Ah, well. Such musings are the last you will probably hear from me in a while. My gadget money for the next little bit has most definitely been spent.
  • Casual Literate Programming

    In trying to really "get" literate programming, I have been using it for a number of my smaller, almost toy, projects. Little scripts, utilities, that sort of thing. Those chunks of code that programmers and computer enthusiasts write, not because they must or for a paycheck, not even to write that big application that's missing from their toolboxes--but just to make those stupid little problems in life go away. The results have been encouraging. Overall, it does not seem to take me much, if any, longer to write the little script. Usually, the information is nothing new--it is the requirements and research that I would have had to do anyway to build the tool in the first place. At least with this method, that information is not lost as it is stored right alongside the script itself. It is true that this does not produce the large volumes that literate programming is semi-famous for, including Knuth's own Literate Programming, but it does fulfill the fundamental tenet of literate programming, namely that programs should be written for other human beings. This is particularly pertinent in scripting, because, it seems, that scripts are particularly prone to the read-only syndrome. I noticed this myself just recently when I was cleaning out my ~/scripts directory (superseded by ~/src and ~/bin--one of the reasons for the spring cleaning). Does anyone else have any thoughts on literate programming for little one-offs?
  • Asciidoc backend for Noweb

    I've been toying with using noweb for some miscellaneous coding projects of late. I personally prefer Asciidoc over other formats supported as backends for noweb, including LaTeX and HTML, so I wrote a backend for it. A copy of my patchset can be found at: http://www.mad-computer-scientist.com/files/toasciidoc.patch The archive contains patches against the 2.11b release (current, as of this writing). Drop me a line if you have any questions or comments.
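With the patch applied, weaving presumably looks just like the Asciidoc invocation used elsewhere on this site--something along the lines of the following, with foo.nw as a placeholder file name:

    noweave -asciidoc -index foo.nw > foo.txt

If the flag name differs in your build of the patchset, the patch itself is short enough to check.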
  • Compiling a KDE 3.5 app on Ubuntu 10.04

    I was trying to compile an old KDE 3.5 application this weekend, because no port to KDE 4 has yet been made. Judging by the activity on the site, it is doubtful that one ever will be. For the curious, I was trying to get SchafKopf [1] up and running. I am running Ubuntu 10.04 (Lucid), which has long since ditched the KDE 3.x line for the shiny 4.x line. While some of the KDE 3 libraries still ship, some key ones for this particular application were missing--namely, libkdegames. The solution turned out to be the KDE/Trinity project [2], which attempts to continue development and maintenance on the 3.x line. As a sidenote, this is what is so beautiful about open source software: the vendor threw out a line that was beloved by some and replaced it with something they did not like so well. Rather than being stuck, those users can maintain the software themselves. Back to the problem. All I needed to do was install the libkdegames-kde3-dev package and run:

    % ./configure --without-arts --includedir=/opt/kde3/include/kde
    % make
    % sudo make install

and everything was cool. --without-arts may or may not have been necessary after installing the Trinity libraries; it was earlier. /opt/kde3/include/kde is where the includes were placed, instead of the standard path. Have fun.

1. http://schafkopf.berlios.de/
2. http://trinity.pearsoncomputing.net/
  • Of Grants and Taxes

    Academic literature is expensive. Now, a great many academics post their work for free on their websites, for their own reasons. In most cases, it is because publication is largely a publicity act. Others want to see their work spread. Others think information should be free. Whatever their reasons, some do. Some don't; they largely do not see the point. Whatever the acts of individuals, academic literature is expensive in its published form. If you go to ScienceDirect, the price for a single article or book is steep--over $30. You have to dig for the prices, too: http://www.info.sciencedirect.com/buying/individual_article_purchase_options/ppv/ Journal prices are similarly steep. There is just one catch: almost all academic research is funded by government grants. Why, as a taxpayer, should I have to pay over $30 to access a paper that I paid for in the first place? Private researchers and private journals have every right to control the prices of their wares--but the academic world has no business charging an American citizen anything more than bandwidth (pennies per download) for work done on a United States federal grant. Of course, none of this would even be a theoretical concern if we respected the Constitution's limitations and left the funding of research "to the states and to the People."
  • ebooking

    Over the weekend, I tried to read PG's Paradise Lost etext, in mobi format, on my work Blackberry. I found something interesting and tried to add an annotation--a note, right? I'm reading a book and I want to take notes. Sounds simple enough. But the field for the note is ridiculously short, to the point of being almost worthless. Let me get this straight: I can't take arbitrary-length notes while reading a mobi? WHAT IN THE HECK?
  • Living with Info

    Since my previous rant on Info files, I have had yet another run-in with GNU's documentation system. This has brought a couple of things to the forefront. First, that I have some more random ramblings on the subject of GNU. Second, that this time I couldn't take it and decided to find some things that would make life livable with Info. In my googlings, I found a discussion of Info in which the Info-lovers seemed to be at a genuine loss to understand why the rest of us despise Info with a passion. Almost without fail, you find that the Info apologist, at some point, says that they love being able to navigate Info from Emacs. There is Info's sweet spot. If you are one of those poor souls who lives in Emacs (I look up man and info pages alike from the command line), Info will seem pretty sweet: bindings similar to the main editor (less bends more towards the vi side of things), and the documentation browser embedded in the editor. Not half bad. They also maintain that the hyperlinks are an awesome part of it. As if text browsers don't exist or haven't been fully integrated into Emacs (everything is in Emacs, except a good text editor). Fortunately, there is hope. There are ways to avoid interacting with info proper. The first option, found on Server Fault, is to convert Info files to plain text with info itself. Just run:

    info --subnodes --output=output.txt infopage

and all of the nodes on the info page will be dumped into the given output file. This can then be viewed with the pager of your choice. Another option, posited by the denizens of reddit, is an application named pinfo. It is a nice little info page browser with lynx/vi/less-like bindings. I have tried both, but I tend to find myself dumping Info files out to text and viewing them with less more often. It is much more manly.
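If you do this often, it is easy enough to wrap the conversion in a tiny shell function--the name is mine, and it assumes your info build accepts - to mean standard output:

    # dump an info page to stdout and page through it with less
    infoless() { info --subnodes --output=- "$1" | less; }

After that, infoless coreutils and the like read the way man pages do.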
  • Noweb & Vim

    I just posted my first vim script, a syntax file, to vim.org: http://vim.sourceforge.net/scripts/script.php?script_id=3038 It is a little mode, of sorts, for working with Noweb files in vim. Basically, it uses one syntax for the doc chunks, another for code chunks, and autofolds the code chunks (off by default). Folding just feels natural with code chunks. I am using it now with one of my tinkering projects and, despite its minimal size, a mere 35 lines, it is really nice. Of course, it would be a little odd to write code facilitating literate programming in a nonliterate style, so I will be posting the full literate version here shortly.
  • et tu, WordPress?

    I've been playing with some code to handle WordPress exports (I'm planning to consolidate and retool this site--I don't like the schizophrenic two-sites-within-a-site mentality it has right now) and one thing is clear: WordPress has some issues. A nice platform, by and large, but the export, running the latest stable version, produces invalid XML. The database collation is UTF-8 and there are characters in the dump that are valid UTF-8, but invalid XML. Moreover, the URLs are not properly escaped, so the anchors in URLs make the parser throw invalid charref errors. Most of the offending posts are, of course, spam from before I got some good captcha software running (thanks, Zach). These are duly marked as such in the markup and would, of course, have been excluded from any of the later processing--except that I am having to spend time hacking around the broken markup just to get to that point. Oh, well. Such is life.
  • When You Need the Bleeding Edge

    For most applications that I use on a day-to-day basis, I am quite happy with the current version in my distro du jour's (Ubuntu, of late) repositories. Sure, a little more cutting edge would be nice, but good enough is good enough. I had a technical writing professor once who bemoaned the fact that most people--students, professors, and professionals alike--only know about 10% of what their word processor can do. His facts were right. In non-technical fields, most people are probably only aware of 1% of what Microsoft Word can do. The same thing is true with command line apps. My most typical use of find is probably

    find . -name 'foo'

find has tons of options, but this is the one I use the most. So, it is true that the versions in a given distro are not the bleeding edge, but, normally, I don't need the bleeding edge--and I don't have the time to care intimately about everything (I'm looking at you, Gentoo--you're a lot of fun if I've got a lot of time, but I don't). But there are a few applications I use where it pays to be, or at least sit closer to, the bleeding edge. For me, those applications are:

sup--simply the greatest CLI mail program in human history. The best of mutt, pine, and Gmail in one easy-to-use application. I started using it at version 0.8 or 0.9 (I forget which) and am on the current release, 0.11.

tmux--this upstart rival to GNU Screen offers lots of goodies to the discerning user. The first and most obvious is the ability to split the screen either horizontally or vertically; beyond that, there is a much friendlier configuration file (I don't use the standard bindings on either GNU Screen or tmux), UTF-8 support, and plenty of little technical details (true client/server operation being not the least of these).

The commonality is that, in both cases, we are talking about relatively young applications and projects that offer a nice set of changes over existing software. The features are the main mover here. I would not go through the hassle of manually upgrading (through a manually-installed gem system for sup, or from manually downloaded sources for tmux) and maintaining either of these applications if their corresponding old and stable counterparts had what I wanted. In the case of tmux, the biggest thing was horizontal and vertical splitting. The only way to get this in GNU Screen is to download an unofficial patch and keep it up yourself. In my opinion, this is even more obnoxious than keeping a separate app, because GNU won't break down and just add support. What applications, if any, do you run on the bleeding edge? Or do you think the bleeding edge is a complete waste of time?
  • Firefox Stealing Focus

    Recently, I switched from using the Sage RSS feed reader inside Firefox to using Newsbeuter outside it. I've been loving Vimperator recently, but Sage was making me break away from the keyboard to choose links. Newsbeuter, on the other hand, runs comfortably in a terminal, allowing me to open links from the keyboard. There was just one problem: whenever I opened a link from Newsbeuter, Firefox would steal the focus. This was unacceptable. All I wanted to do was page through my fresh feeds and pull open a bunch of interesting stuff, then go read it. No flipping back and forth. Just go through the day's bounty. A little googling turned up this thread: http://ubuntuforums.org/showthread.php?t=783263 Long story short:

Go to about:config
Change browser.tabs.loadDivertedInBackground to true

A more interesting question is why the Firefox devs believe that browser.tabs.loadDivertedInBackground should default to false. All devs everywhere: stealing focus is wrong. Yes, wrong. Intensely annoying and just plain wrong.
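If you prefer keeping the change somewhere you can track and carry between machines, the same tweak can live in a user.js file in your Firefox profile directory (create the file if it is not already there):

    // stop newly diverted tabs from stealing focus
    user_pref("browser.tabs.loadDivertedInBackground", true);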
  • Pattern

    The use of design patterns means that your language just isn't cutting it for you.
  • More Thoughts on CMSes

    I have written previously on some of the CMSes that I have used. What I have yet to see in any of them, be it open source or closed, written in any language you care to name, is the CMS that I want. This rant is my view of what a CMS should be. All websites have a simple structure, whether we are talking about a chaotic mega-entity like Wikipedia or a little personal home page. I should be able to lay out my website, in arbitrary depth, and I should be able to do it in a declarative way. No hacking. Just let me arrange my data. Websites will remain largely hierarchical (show me one that isn't) just by the nature of things. Our file systems are hierarchical. You enter a website at a predefined point (though Google and bookmarking allow us to jump to more random places in the tree) and move on from there. This is not going to change any time soon. Do not make me use a JavaScript WYSIWYG. I appreciate that they exist and I have used them, but I would rather compose my content as text files, thank you very much. Markdown, Asciidoc, and company are far superior composition formats. A website may fill any number of functions--blog, collection of pages, wiki--and these can change at a moment's notice. I want to be able to compose these functions as I see fit. Drupal, Joomla, and friends are miserable in this regard. Their ecosystems have ecosystems. Rather than building small and beautiful and allowing the user to compose the results, we get tangled jungles within jungles. Templating should be simple: I'll lay it out, you fill it out. Most CMSes make far too much distinction between different kinds of blocks and content. I don't want to care. At all. All of this should be malleable at a moment's notice. A SOAP interface for handling that text-based content would be nice. In short, it just shouldn't be this convoluted.
  • jqGrid Frustrations

    I just got a jqGrid (not my first, I might add) put together that took way too much time. While I am going to go back and add some features now, I was struggling to get the blasted thing to do the easiest task in the world: show some data. That's it. No paging. No searching. Nothing fancy, just show the data.

    $(document).ready(function() {
        $('#grid').jqGrid({
            url: 'my_callback.php',
            datatype: 'json',
            colNames: ['col0', 'col1', 'col2', 'col3', 'col4', 'col5'],
            colModel: [
                {name: 'id', index: 'id', width: 50},
                {name: 'foo', index: 'foo', width: 200},
                {name: 'baz', index: 'baz', width: 100},
                {name: 'quuz', index: 'quuz', width: 150},
                {name: 'bar', index: 'bar', width: 150},
                {name: 'schnoodle', index: 'schnoodle', width: 150}
            ],
            mtype: 'GET',
            rowNum: 20,
            viewrecords: true,
            imgpath: '../javascript/themes/basic/images',
            caption: 'My Caption'
        });
    });

Where, of course, a few of those items have been anonymized. As vanilla a setup as you could ask for. I agonized over the JSON data, making sure it matched the default JSON reader (which, I might add, I used the last time I used this component). Finally, trial, error, and Google prevailed. This is the code that worked:

    $(document).ready(function() {
        $('#grid').jqGrid({
            url: 'my_callback.php',
            datatype: 'json',
            dataType: 'json',
            colNames: ['col0', 'col1', 'col2', 'col3', 'col4', 'col5'],
            colModel: [
                {name: 'id', index: 'id', width: 50},
                {name: 'foo', index: 'foo', width: 200},
                {name: 'baz', index: 'baz', width: 100},
                {name: 'quuz', index: 'quuz', width: 150},
                {name: 'bar', index: 'bar', width: 150},
                {name: 'schnoodle', index: 'schnoodle', width: 150}
            ],
            mtype: 'GET',
            rowNum: 20,
            viewrecords: true,
            imgpath: '../javascript/themes/basic/images',
            caption: 'My Caption'
        });
    });

See the difference? That's right. There is a datatype entry and a dataType entry. Mind you, when I took either of these away, the grid broke in some way. Leave them both, and it is fine. A quick grep of the code shows that, predictably after all this angst, parts of the code use datatype and other parts use dataType. JavaScript is case sensitive, so it matters. Now, the version of jqGrid in the code base is not the newest, but at v. 3.2.4 it should have been pretty stable. No doubt an upgrade would fix this (all right, after this I do have some doubt), but I am ticked. How could no one have noticed this after 3 versions? It's not like I downloaded version 0.01, or was running an SVN bleeding-edge edition. It failed on an exceptionally simple example. jqGrid is a nice component, overall, but far, far too brittle. PS - the link that put me on the right trail is here: http://stackoverflow.com/questions/259435/jqgrid-with-json-data-renders-table-as-empty.
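For anyone fighting the same battle: the shape the default JSON reader expects is, roughly, the following, with the values in each cell array following the colModel order. The names here match the anonymized example above:

    {
        "page": "1",
        "total": 1,
        "records": "2",
        "rows": [
            {"id": "1", "cell": ["1", "foo 1", "baz 1", "quuz 1", "bar 1", "schnoodle 1"]},
            {"id": "2", "cell": ["2", "foo 2", "baz 2", "quuz 2", "bar 2", "schnoodle 2"]}
        ]
    }

If the grid stays empty even with data in this shape, the datatype/dataType quirk above is the next thing to check.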
  • Setting up Mendelson AS2 HOWTO

    Overview. AS2 is a wire protocol for transferring files between two organizations. This guide explains how to get the mendelson open source AS2 server up and running. The instructions are slanted towards setup on a Debian box, though setup on any *NIX system or Windows should be very similar. The goal in this guide will be to set up two independent machines, one for test and one for production, and get them talking to one another. Finally, this guide was developed and tested with mendelson AS2 version 1.1 and Debian 5.0.

Installation. Get the zip file from the Mendelson AS2 sourceforge page. No installation is needed (though an installer is provided for Microsoft Windows boxen); just unpack the files in some location. What I did was:

- create a separate user to run the AS2 software (named, cleverly enough, as2user)
- unpack the software in a directory in the as2user's home directory
- run the software in a GNU Screen session
- for the GUI portion, run a lightweight window manager (IceWM was my choice) and a VNC server

As always, your mileage may vary. Out of the box, the mendelson AS2 server is configured to interact with the mendelson test server, and nothing else. The next step is to set up the keys.

Configuring Keys. This process can be done with Portecle, keyman, or the OpenSSL toolchain. The most user-friendly of these is Portecle, which is also the one that Mendelson recommends. Because Portecle is pretty straightforward (and, if you choose one of the other tools, you almost certainly know what you are doing anyway), we will skip the exact sequence of clicks or commands needed for this. Mendelson AS2 stores its keys in the certificates.p12 file in the root of the install directory. The password for this store is, incidentally, test. The first thing we need to do is recreate the private keys: either delete the existing keys and create new ones under the same names, or delete the store and create it afresh with new keys under those names. The names of the keys are Key1 and Key2. After creating the private keys on both machines, we need to export certificates for each, then exchange them. If mendelson AS2 is running, the certificates and keys can always be reloaded by clicking File → reload key store.

Patching the Scripts. Mendelson AS2 comes with DOS batch files and bash scripts to launch the AS2 server on Windows and *nix machines, respectively. Not to be disparaging or anything, but the bash scripts do not appear to have been well tested. I had to make the following changes to them:

- Both the mendelson_as2_start.sh and mendelson_as2_stop.sh files used Windows line endings instead of UNIX. The dos2unix utility (available in the Debian and Ubuntu package managers) fixed this problem.
- Make both of the aforementioned files executable.
- At the top of the mendelson_as2_start.sh file, there is a line setting the CLASSPATH. I had to modify it to CLASSPATH=as2.jar:jetty/start.jar:jetty/lib/servlet-api-2.5-6.1.1.jar

Once the appropriate changes have been made to the start scripts, just run:

    ./mendelson_as2_start.sh

from the install directory.

Configuring the Local Station. Before the server can receive messages, it must be configured as a local station. By default, a local station will already be set up; the parameters just need to be adapted to match the actual environment. Pretty much all you will have to change right off the bat is the MDN (URI), which is set to a mendelson domain. While here, you will also need to select the keys you generated above for the local station under the Security tab.
Configuring Partners. After a minute or two, the GUI will pop up. It is here that the AS2 partners must be set up before files can be exchanged. Take the following steps:

1. Click the button labeled "Partner" (or go to File → Partner).
2. Fill out the forms. The rest of this should be fairly obvious, but to go over it:
   - Misc: name, AS2 ID, email address (a contact), comments
   - Keys: if you imported the keys above, the certificate for the trading partner should be available from the drop down. Select it or bad things will happen--I promise.
   - MDN: the URI of the recipient.
3. Click Ok.

Sending Messages. This part is easy. Copy a file to the intended recipient's directory on the server. By default, mendelson is set to poll for new files every 10s (a little inotify support here would rock). In general, relative to the mendelson install directory, the location will look like messages/<partner AS2 ID>/outbox. Conversely, messages will be received in messages/<partner AS2 ID>/inbox. Copy and run. The main windows on client and server will show their respective progress.

Configuring HTTPS. At this point, if everything has gone according to plan, messages can be exchanged in plain HTTP. In many situations, however, we want to exchange messages over HTTPS for added security. To do this, we must:

1. Configure Mendelson AS2 to use HTTPS
2. Generate new keys for the HTTPS store
3. On the sender, import the certificates for the recipient

This may sound a little confusing, given that we discussed generating keys above. It turns out that Jetty (the HTTP server and client that Mendelson AS2 uses) has its own separate, independent keystore for sending over HTTPS. Moreover, the keys in it are expired, which is probably just as well because it makes us generate fresh ones. Under the main Mendelson AS2 directory, there is a directory named jetty/etc containing Jetty's configuration files. Jetty itself uses jetty.xml, and an example configuration for SSL is in jetty-ssl.xml. Copy the connector definition from jetty-ssl.xml into jetty.xml; it looks like this:

    <Call name="addConnector">
      <Arg>
        <New class="org.mortbay.jetty.security.SslSocketConnector">
          <Set name="Port">8443</Set>
          <Set name="maxIdleTime">30000</Set>
          <Set name="keystore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
          <Set name="Password">OBF:1vny1zlo1x8e1vnw1vn61x8g1zlu1vn4</Set>
          <Set name="KeyPassword">OBF:1u2u1wml1z7s1z7a1wnl1u2g</Set>
          <Set name="truststore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
          <Set name="TrustPassword">OBF:1vny1zlo1x8e1vnw1vn61x8g1zlu1vn4</Set>
        </New>
      </Arg>
    </Call>

You'll notice that it references the defunct keystore. Create a new keystore, populated with two keys, from the following commands:

    keytool -genkey -alias Key1 -keypass changeit -keysize 1024 -keystore my.keystore -keyalg RSA -storepass changeit
    keytool -genkey -alias Key2 -keypass changeit -keysize 1024 -keystore my.keystore -keyalg RSA -storepass changeit

where my.keystore is the filename of the new keystore and changeit is the password for the store. For Password, KeyPassword, and TrustPassword, put the values corresponding to those used in the keytool commands. From the destination, export a certificate and import it into the keystore that was just created (a keytool sketch of this step appears at the end of this post).

Conclusion. Once you have the AS2 server up and running, the process of adding real-life partners is fairly similar. The only other parting tip I can offer (thanks to the forums) is that if, at any step, something goes wrong, the start-up script can be patched to provide a lot more debugging information by changing the last line of mendelson_as2_start.sh to read:

    java -Xmx192M -Xms92M -classpath $CLASSPATH -Djavax.net.debug=all de.mendelson.comm.as2.AS2

The addition is -Djavax.net.debug=all. As implied, this will dump all sorts of goodies to the terminal.

Appendix - Signed MDNs. As of Mendelson AS2 1.1 - build 29, there is a bug that causes verification errors with signed MDNs sent by mendelson AS2 to non-mendelson AS2 servers.
The solution is to get the b29 source module from CVS (on Sourceforge), change all occurrences of "\n" in the message strings to "\r\n" in MDNText.java, then navigate to de/mendelson/comm/as2/message and run:

    javac -classpath /path/to/as2.jar MDNText.java

This will create a new MDNText.class file. Unzip the jar somewhere that will not trample anything, replace the MDNText.class file, and create a new jar. This updated file should solve the problem. At least, it did for me. Your mileage may vary, of course.

Sources

- Post #3 of the "quick start guide" on the mendelson forums
- Mendelson Project Page
- Web browser interface not working
- Windows linefeed in mec_as2_start/stop.sh scripts
- SSL Problems
- b27 start error on Linux (solved)
- Default password in the Keystore password?
- HTTPS Communications Difficulty
- Message digest mismatch in signature
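As a footnote to the HTTPS section: the export/import round trip mentioned there can be done with keytool along these lines. This is only a sketch--the aliases, file names, and passwords are placeholders for whatever you actually used:

    # on the destination: export the certificate for its HTTPS key
    keytool -export -alias Key1 -keystore my.keystore -storepass changeit -file partner.cer
    # on the sender: import that certificate into the newly created keystore
    keytool -import -alias partner -file partner.cer -keystore my.keystore -storepass changeit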
  • The "Business Perspective" is a False Canard

    Flipping through the C++ FQA, the phrase "from a business perspective" popped up a number of times, and it occurred to me how often I have heard that phrase, or something like it, used to refer to the needs of management as opposed to the needs of programmers. In fairness, I must also add that the author of that fine document is semi-quoting from the C++ FAQ. As I stared at those words, something jumped out at me: when it comes to tech, there really isn't any such thing as a "business need", because the geeks and the suits ultimately want the same things. What kinds of things do we find in the "business perspective"? Well, how about these:

- Economy of price - we need to keep costs down in order to increase our margin
- Economy of time - closely intertwined with economy of price, but still separate, in that we want to get our product to market ASAP, even aside from price, to help grab up marketshare
- Capability - it must do whatever it is that we need it to do

That should pretty much cover it. The friction between the two groups does not come from these basic wants. Developers do not want to work longer on a project than is necessary. By and large, they want to do it and move on. About the only time I see a real collision is when developers try to make their own jobs a little more interesting. Even here, we see that this is mostly unconscious. The developer trying to interestify the job usually believes, consciously, that they are solving some problem that is stalking the whole project. This is hardly unique to the developer side of the equation, as we (or, at least, I) have seen the business types getting all distracted by shiny little trinkets. So, then, at the end of the day, the friction seems to come less from the core concerns (which are, more or less, shared by both parties) than from how they are perceived. But the phrase "business perspective" is a lead-in to a pack of nonsense.
  • MySQL Hatred

    Further anti-MySQL bile: I hate MySQL this morning. While doing some nice sysadmin-type stuff, I wanted to either lock a database down or (better yet) take it offline completely, while leaving everything else untouched. In MS SQL Server, that is a quick command, or a few clicks of the mouse if you are so inclined. Easy. MySQL does not have this basic, basic admin feature. Hacks from the intrawebs include:

- FLUSH TABLES WITH READ LOCK
- Changing user permissions

What the heck? I have to tinker with user permissions to TAKE A DATABASE OFFLINE? And FLUSH TABLES locks, all right--every table in every database. If you're running one database, that's fine. Me, I've got closer to 30. Sure, in our case, this is because we have a couple of apps that are badly designed. But still. What if I had two? Say, a blog and a wiki? Same problem. Take one down, take the other down. Or fiddle with permissions. I'm sorry, this is just wrong.
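For the record, the permissions fiddling in question amounts to something like the following; the database and account names are placeholders, and it is a workaround, not a real OFFLINE switch:

    -- kick an application account off one database only
    REVOKE ALL PRIVILEGES ON mydb.* FROM 'appuser'@'localhost';
    FLUSH PRIVILEGES;
    -- and to bring it back online later
    GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'localhost';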
  • QuickBooks Hack

    Recently, at the old workplace, I encountered a problem on one of our accountants' workstations. On this station, starting up QuickBooks Pro 2008 caused the Windows Installer to pop up (usually locking up somewhere around "Preparing to install...", but sometimes making it to the install screen). Going through the installation (again) did not work. Intuit has a knowledgebase article on this (http://support.quickbooks.intuit.com/support/pages/knowledgebasearticle/1005515.html) that makes three suggestions:

- Reregister QuickBooks's DLLs with the reboot.bat script in the QuickBooks directory
- Repair the .NET framework versions (1.1 and 2.0, in this case) for the QuickBooks version
- Reinstall QuickBooks

In my case, none of these worked. A blog entry, which I cannot seem to find now, suggested uninstalling QuickBooks, removing the .NET framework, and reinstalling. For good measure, it also recommended using some tools available for download through MSDN to completely nuke .NET from the system. This, also, did not work. What I found did work, however, was running QuickBooks as another user on the same machine. Same permissions, mind you, as the accountant, but someone else. This worked, but was a pain in the neck, as I had to keep logging the user into QuickBooks. The final solution was to wrap this little bit of hackery in a batch script, create a shortcut to it, and replace the user's icons (desktop and start menu) with the shortcut to the batch script--oh, and change the icon for good measure. The batch script follows, with the username expurgated:

    runas /savecred /user:equesada\Administrator "C:\Program Files\Intuit\QuickBooks 2008\QBW32Pro.exe"
    runas /savecred /user: "C:\Program Files\Intuit\QuickBooks 2008\QBW32Pro.exe"

After trying this several times, it appears to be working fine. My assumption is that the problem lies somewhere in the registry settings for that user's profile. It isn't some sort of broad permissions issue, since the user was able to start the program fine before; even now, the only problem is that it loops into the installer. Several registry scans and cleanups failed to find the problem. Why does QuickBooks have to be such a pain in the neck?
  • Ant

    I poked fun at Ant before. I applaud the devs for trying to develop a better build tool, and I also see the usefulness of a less Unix-centric build tool when writing in Java (WORA, right?). The problem, as I see it, is that their cure is ultimately worse than the disease. It is annoying to have to deal with hard tabs in Makefiles, but can anyone really claim that handwriting XML is more pleasant? I certainly cannot. A couple of commands in vim and the Makefiles are easy enough to work with; nothing can take the pain of XML away--not even the hierarchical editors that are becoming common. Another virtue of the Makefile is its beautiful simplicity: we have dependencies, and then we have the commands that can be used to update those dependencies, with the decision of what needs rebuilding handled automatically. An Ant build file, on the other hand, requires that all of its tasks be Java classes. This makes Ant much less useful in the generic case. I have been playing with some literate programming lately and have to tangle the source out of the original Noweb file. In a Makefile, this is easy enough. In an Ant file, I have two choices: break out the Java compiler and write an Ant task to handle Noweb files cleanly, or use the exec task (at least, I think that is what it is called--it is what NAnt calls it). If I do the former, I get a lot of overhead to do something simple. If I do the latter, I have an ugly XMLified Makefile (see the sketch below). In the final analysis, I think Ant would have been a lot more useful if it had kept the Makefile's cardinal simplicity and removed the ugly parts (hard tabs). To make it truly platform independent, common shell commands (copy, delete, etc.) may have needed some massaging by the Make system.
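To make the comparison concrete, here is the kind of thing I mean--a sketch only, with made-up file names. First the Makefile rule for tangling a noweb file (the command line under the target must be indented with a hard tab), then the rough Ant equivalent, falling back on the exec task:

    foo.sh: foo.nw
            notangle -Rfoo.sh foo.nw > foo.sh

    <target name="tangle">
        <exec executable="notangle" output="foo.sh">
            <arg value="-Rfoo.sh"/>
            <arg value="foo.nw"/>
        </exec>
    </target>

Two readable lines on one side, seven angle-bracketed ones on the other.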
  • Unixing away from phpMyAdmin

    Here at the ol' job, we use MySQL (something I have blogged about before) and, naturally, have phpMyAdmin installed. I seldom use it, as I prefer a nice CLI interface, but it does provide a few amenities that have kept me logging in even when I don't, strictly speaking, need it. Some of these are the editing feature (yes, I am lazy enough that sometimes I would rather not sit down and write out an UPDATE query), the printing (which is much nicer than out-of-the-box lp or lpr on *nix machines), and dumping stuff to CSV or Excel (which is nice for the one-off reports I occasionally have to run). The last couple of days, while working on some reconciliation-type reports that get a little involved, I decided to take advantage of the Unix philosophy (a tool for every job, do one thing and do it well, etc.) and make my life quicker and easier from the MySQL command line. So, here is a look at the various tweaks I've made.

The first thing to look at is paging. The client doesn't do any out of the box. After fiddling with a handful of pagers (less, more, most, and w3m), I decided on w3m, for reasons that will soon become fairly clear. To make mysql page, simply run this inside the client:

    pager w3m

or substitute more, less, most, or whatever command you want to be the pager. The setting can be made permanent with a pager line in your ~/.my.cnf.

Next up, we have printing. This is why I chose w3m: less and most provided no way that I could see to pass the piped-in text off to a printer. If some pager connoisseur would care to correct me on this score, I am all ears. In ~/.w3m/keymap, set:

    keymap C-p SAVE_SCREEN "| a2ps -1 -r -f 7pts"
    keymap q EXIT

The second item maps q to exit without confirmation; out of the box, w3m always prompts, and I hate being prompted. Remove it at your liking. The first line maps the sequence Control + P to a SAVE_SCREEN command (which is used to dump pages to files) and pipes the result to a2ps. You can look up the options for a2ps, but the end result is that, since no output file was specified, a2ps prettifies the text handed to it and sends it off to the printer.

Finally, we have that little problem of dumping to Excel. We do not have to dump straight to .xls or .xlsx format; CSV will do, despite being a poor format in general. MySQL can do this part natively by running a query like so:

    select * from foo into outfile 'someplace.csv' fields terminated by ',' lines terminated by '\n';

This is nice, but, speaking for myself, I usually review the results before dumping them out, just to be sure they look roughly the way I want or expect. Another good way to do this is to simply put the query into a file and run it like this:

    mysql -uuser database < query.sql

When run noninteractively, the mysql client outputs the records in a tab-delimited format. Piping this through sed and into a text file will create a simplistic CSV, or opening the tab-delimited output with a spreadsheet app (like Excel or OpenOffice Calc) will allow it to be exported to a more friendly format.
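To spell out that last pipeline, the whole round trip looks something like this--a sketch with placeholder names, assuming GNU sed, and exactly as simplistic as advertised (no quoting of fields that contain commas):

    mysql -uuser database < report.sql | sed 's/\t/,/g' > report.csv

From there, report.csv opens directly in Excel or OpenOffice Calc.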
  • These Guys are Heroes

    If you saunter on over to Vimperator.org, you will see some heroic members of the hacker race. From their website: "Writing efficient user interfaces is the main maxim, here at Vimperator labs. We often follow the Vim way of doing things, but extend its principles when necessary. Towards this end, we've created the liberator library for Mozilla based applications, to encapsulate as many of these generic principles as possible, and liberate developers from the tedium of reinventing the wheel." Also deserving of honorable mention are the denizens of suckless.org. Their philosophy is (again, from the website): "Our project focuses on advanced and experienced computer users. In contrast with the usual proprietary software world or many mainstream open source projects that focus more on average and non-technical end users, we think that experienced users are mostly ignored. This is particularly true for user interfaces, such as graphical environments on desktop computers, on mobile devices, and in so-called Web applications. We believe that the market of experienced users is growing continuously, with each user looking for more appropriate solutions for his/her work style." In both cases, it reminds me of what I wrote about usability vs. learnability, which is, undoubtedly, why I like it. There are only a few safe havens for the advanced user. Most of the world is trying to build idiot-proof systems. Some of us want power tools. We want tools with pointy edges. We want to be able to do grand things--and this inevitably means having the ability to wreck our own systems. Despite the small groups interested in a more advanced usability, the world will, by and large, never accommodate us. Part of the reason is, of course, numbers. I'd rather sell an application with a potential audience of 12 million than one of 1200, wouldn't you? But that is not the whole story. Small groups can and do have sufficient buying power to warrant some attention. The bigger problem is that this is an audience that can create software to its own specifications and does not need some third party to build it. Worse still, if someone were to provide software for this exclusive market, anything creative or innovative would swiftly be copied into a new project, because the audience is made up of infernal tinkerers. So, we come back to the place we started from: small groups of people dedicated to creating truly usable software.
  • Ant and Irony

    The authors of Ant deserve a prize for irony. In the manual's introduction, it says that "Makefiles are inherently evil as well." I won't deny the shortcomings of make, but these guys declare make evil and then go and build an XML-based build system. I mean, come on: if there is anything worse than worrying about tabs, it is having to write XML by hand.
  • Ruminations on Literate Programming

    First off, I would like to begin by saying that this post will be a little different than usual. It is not so much an explanation, a tutorial, or the asserting of an opinion (and you all know what an unopinionated fellow I am) as it is a monologue-like discussion: running through possibilities, tossing out ideas, but not likely to present any firm conclusions. So, here we go. I recently read Donald Knuth's paper on WEB, a literate programming system that he wrote with others for their own use. The paper is listed in the references section. At first glance, literate programming makes perfect sense in academia. Code is not written there that is not intended to be published as either a paper or a book, and using literate programming makes the task of doing both easier. As the content of the code changes, the commentary itself is readily changed to match. The question that comes to mind, though, is whether or not literate programming has potential for the working programmer. In academia, the real work is generally not the programming. The programming itself is merely a way to try to prove whatever the hypothesis is; it is the equivalent of a test in a physics laboratory. The working programmer is not using his code as a mere test. It is the final product and it has to work. Moreover, it must also be delivered in a reasonable (or, more often, unreasonable) amount of time. In this different atmosphere, does literate programming still have a place? Would it work as well for someone writing code to track truckloads as it does for Knuth when he writes his books? At first glance, it would be easy to say no, but Knuth extols the methodology for reasons that most programmers would find appealing. He intimates that LP makes maintenance easier. If there is anything the working code monkey would love to see, it is an easier job in maintenance. Most of us have had the experience of looking at a screenful of code and wondering what he (or I) was thinking when it was written. If we are writing down our reasoning with the code, then the questions go away. We may not agree with the reasoning, but at least we would understand the angle from which the problem was hit. Naturally, most people would do as poor a job of maintaining an essay as they would the comments (there are virtually no comments in my production code). As with any methodology, its utility stands on its practitioners, not on its non-practitioners. On StackOverflow, several users run down the idea as outdated or outmoded, suited to the dark days when we were limited to two-character variable names. While the utility may be increased under such conditions, they have missed the point. Literate programming is not about writing a lot of comments--it is about writing a book or article on the problem, side by side with the problem's solution. Literate programming is not an idea confined to a specific time. It is not a hack (as intimated). It is a way of looking at programming that turns the whole process on its head: the machine becomes auxiliary, and the human audience becomes primary. It may be that this approach does not hold practical utility--but it is not something to be as lightly shoved aside as the idea of starting a completely new and independent piece of software in RPG III. These rambling thoughts led me to look into some present-day tools (even Knuth's own WEB has been superseded, it seems). The one with the biggest following is noweb, which is language-agnostic.
My biggest complaint, as I fished through the tools I could find, is that they almost universally use TeX as their typesetting format. Historically speaking, this makes sense. Knuth wrote WEB and TeX and, more specifically, he wrote WEB for TeX. I, however, do not want to compose text in TeX or LaTeX. As I have written before, it is just too cluttering. There are a few out there that rely on something else. I found one that used wiki syntax. At least noweb supports HTML mode which, while still imperfect for composition (as an interchange and basic display format, it is excellent), is at least usable. Any value that LP has will largely rest on the fact that it forces the programmer to think a little bit more about what he is doing as he is doing it. In this way, it is not unlike Haskell's type system (which also makes it unsurprising that the Haskell community is one of the more vibrant outposts for LP). A lot of questions still remain. Most LP tools are usable for standard write-compile-test cycles. For languages like Lisp, a separate tool would have to be created (not that a lot of weekend warrior projects do not already exist). On StackOverflow, a few users expressed concern about how you would use LP in a collaborative environment. Personally, I suspect that it would work similarly to the way most technical writing teams work: divide and conquer. Distributed source control systems like git or darcs make this even easier. So what is it then? Academic pipe dream or underused tool? There is only one way to find out. Try it.
References:
http://stackoverflow.com/questions/299076/scaling-literate-programming
www.literateprogramming.com/knuthweb.pdf
  • The Old Schooled Office Suite

    Lately, I have been working on getting my few office suite needs moved over to something a little more text based. I find GUI apps pretty inefficient for these kinds of tasks. Understand that this is not an open source vs. closed source discussion. OpenOffice.org Writer and Microsoft Word get grouped together, as one is basically an open source, cross platform version of the other. This is actually a contrast between two different application philosophies. I am, in fact, looking for something that is more usable. Usability is a commonly trotted out concern when evaluating software. Now, when the word is thrown out, it is loaded with a specific meaning that is actually improper to its use. Usability is conflated with something that has a flat learning curve. So flat, that it is assumed that the hypothetical user could learn the application every time they start it. It is assumed that usability refers to this flat learning curve, as opposed to a tool that requires a little more upfront effort. This is unfortunate, because usability is actually a slider between the two extremes of "easy to learn" and "efficient to use". Where a "usable" application falls on this continuum is up to the user in question. Commercial software naturally gravitates towards the "easy to learn" end of the spectrum, because it is the end that accommodates the greatest slice of the populace. Even the more hardcore of us can get by with software that still has the training wheels on. So, the usability I am looking for is one of efficiency, not of expediency. I want apps that will allow me to produce what I want fast and roll on. I am willing to accept a steeper learning curve upfront if it means time savings later on. This is, for example, why I am using Vim for day to day text editing (like programming) rather than jEdit or syn. Sure, it took longer to learn upfront, but I can move through a text file fast. Another consideration for me was that I like using tools that I can run from a straight terminal, because it means that I can access them over SSH. No need to sync files, just remote in to the machine carrying the files. This may not be a concern for most users out there, but it makes my life a lot more convenient. As an example, this little spiel started with the search for an application I could use to draw process flows in a declarative manner. It does not take much to be able to pull open Visio or Dia and draw the diagram. When you think about it, though, that drawing process takes longer than seems necessary. You click the tools in the palettes. You click through the properties. You drag. You drop. You do a lot of work to draw a picture. However, if you could simply declare the relationships, it would be a lot more efficient. That is what I was looking for, but what I've set up is a nice little replacement, for everyday use, for OpenOffice.org/Microsoft Office. So, here goes my list.
    Graph/chart drawing & diagramming
    Graphviz. More typically used for diagrams produced in mathematical articles (writing on graph coloring algorithms, for instance), this little tool will produce directed or non-directed graphs. The syntax is simple and C-like, allowing one to declare nodes, with optional properties, and edges connecting them (a small sketch appears at the end of this post).
    Email
    The two biggest CLI email clients are mutt and pine. Mutt is clearly the more configurable of the two, but Alpine (the new, completely OSS version of Pine) is quick and easy. My biggest annoyance with Alpine is its heavy verbosity. There are plenty of "are you sure?" prompts.
Ultimately, I found Sup. Sup describes itself as a sort of text-based version of Gmail, and it wins hands down with a nice interface and a powerful search engine.
    Word processing
    Word processors have gotten huge. Nonetheless, most of what they have to offer is seldom used. The Word power users I have heard mention this always do so with a touch of regret, as though we all should spend more time learning to use Word. The fact is, most of those features go unused because people don't need them. I fit firmly in that category. I wanted a markup system I could use so that my documents would be composed in pure text (allowing me my precious Vim), but easily run through a converter into something more print-ready or presentation-ready. I considered TeX or LaTeX briefly. However, two things about them bothered me. First, there was the sheer ugliness of the markup. The large quantity of line noise interjected into my manuscript as I was composing was simply too much. It distracted too much and was too cluttering. Secondly, there was the heavy mathematical and scientific bias. A good portion of the documents I work on are related to my fiction writing. The ability to pretty print mathematical formulae is not helpful. So, the trade off was simply not worth the benefits. Wiki syntax is a good example of what I wanted: plain, non-cluttering, and very close to an ASCII version of the formatting. A little bit of research brought me to Markdown, Pandoc (which accepts Markdown), and AsciiDoc. My favorite of these is AsciiDoc. Its native backends are HTML and DocBook. However, DocBook (around which AsciiDoc provides a wrapper utility) can be converted to many formats, including PDF, ODF, and RTF. So, if you use this, you can create something that just about anyone can open. Pandoc actually offers more native backends than AsciiDoc (including DocBook, so the same upsides apply), but generates plainer HTML. Sure, I can style it. I can make it the default by aliasing pandoc to pandoc -c mystyle.css. Someday, I may even do it, but right now I am lazy. I don't want to have to write a reasonable CSS file for quick and dirty use. So, I am using AsciiDoc for now because it caters to my sheer laziness.
    Spreadsheets
    Most uses of spreadsheets are evil. It seems that a lot of the world tries to use Excel like a database system—which it isn't. However, their utility for quick little things cannot be ignored. Besides, I still have to deal with the spreadsheets I receive. I found two modern day, command line driven spreadsheet systems: sc and Oleo. sc is older, its source having its origin as a public domain application on USENET. Its interface is largely inspired by vi and less. Oleo is GNU's spreadsheet application and so, understandably, takes its cues from Emacs. The most important command, C-x C-c, is the same. Naturally, when you put it that way, I have no choice but to fall in with sc. It is a nice little piece of work, allowing quick manipulation of data. Neither of these applications will generate pretty pictures for use in a word processor, but we quickly find ourselves rescued by the UNIX philosophy of a tool for every job. sc and Oleo do not need to provide ways to dump out pretty pictures—that is not their job. They crunch data. We use something else to create graphs.
    Tasks
    Those TODOs that we all have to keep. Microsofties keep them in Outlook. Anti-computerites either keep them on paper or not at all. The best text-based system I have seen is TaskWarrior. It is truly awesome.
Calendaring
    Another application that is integrated with Outlook and Exchange for most users. The command line applications are, of course, more loosely coupled. The best event scheduling calendar I found was Pal. Pal looked like it was more heavily geared towards the way people actually think of calendaring.
    Instant Messaging
    IM is not a part of the traditional office suite, but it is becoming a regular part of most offices. Finch is the clear winner on this front. There are plenty of text based IRC tools, but Finch is built on top of libpurple and supports all of the protocols that Pidgin does.
    Music player
    Again, I suppose this is not really a part of the traditional office suite, but we all want one. There is a reason Windows ships with Media Player. It is because Microsoft recognized that people wanted one and that, if it didn't ship with Windows, people would (horror of horrors!) use non-Microsoft software. My favorite GUI music player is Amarok. On the CLI side, MOC and cmus are excellent choices. MOC seems a little more intuitive.
    Loose Ends
    Unfortunately, the formats version of musical chairs does not usually permit us to keep everything strictly text and HTML. Those of us who are still part of the real world have to accept and send files in popular formats. Fortunately, OpenOffice.org can convert between many of the formats that would be required. Unoconv, a Python script that interfaces with the headless version of OpenOffice, makes those conversions easy from the command line. I am sure that there are people for whom this type of setup would be simply unworkable. Computing ability aside, it emphasises work on content, not work on layout. Obviously, if you are making flyers every day, AsciiDoc is not for you. You could insert images (generating your word art through some ImageMagick kung-fu) and generate a PS file that you lpr off to your printer, but it would probably not be an optimal workflow. However, I do work on content. My programs are content. My stories are content. I am a content person, not a design person, so, for me, this is an excellent setup.
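    As promised above, here is a small sketch of the kind of declarative diagram description I mean. The process flow is just a made-up example; writing the dot source out and rendering it from Python is purely for illustration, since in practice you would keep a .dot file around and run dot on it by hand.

        # Toy example: describe a tiny process flow declaratively in Graphviz's
        # dot language, then render it by shelling out to the dot command.
        import subprocess

        DOT_SOURCE = """
        digraph flow {
            rankdir=LR;
            receive  [shape=box,     label="Receive order"];
            in_stock [shape=diamond, label="In stock?"];
            ship     [shape=box,     label="Ship it"];
            reorder  [shape=box,     label="Reorder"];

            receive  -> in_stock;
            in_stock -> ship    [label="yes"];
            in_stock -> reorder [label="no"];
        }
        """

        with open("flow.dot", "w") as f:
            f.write(DOT_SOURCE)

        # Equivalent to running: dot -Tpng flow.dot -o flow.png
        subprocess.run(["dot", "-Tpng", "flow.dot", "-o", "flow.png"], check=True)

    No palettes, no dragging: the relationships are stated once and the layout is the tool's problem.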
  • What is the Point of this?

    I recently stumbled across some articles on WS-BPEL. BPEL stands for Business Process Execution Language. At first this caught my attention because, well, it sounded like some potentially slick DSL that would help describe business rules and execute them. Slapped in front of a good domain-specific API, something like this could help slash development time. Of course, such things are usually little more than pipe dreams, but today's pipe dream is tomorrow's brave new world. So, it is always better to keep an eye on things. Perhaps the first tip-off that this had nothing new to offer is that BPEL is based on XML. Seriously, how much good can come from XML? Even the few times where the end result is cool (like WSDL and SOAP), a better interchange format could have been chosen. Imagine, for example, a YAML- or JSON-based web services platform. With wider support, that would just rock. But I digress. There is a tutorial of sorts on WS-BPEL out there. When you get past the buzzwords and the fancy terminology, you have an XML-based scripting language to tie basic web services together. Pretty disappointing. After looking at the examples, I do not see any way that this wins out over using Java, C#, or PHP. It is quite a stretch to refer to what this thing does as having anything to do with "business processes". Even an IBM reference on the subject just shows a few simple control mechanisms joined up with the ability to call web services. So, if you have seen this used in the wild to an efficacy above and beyond typical programming or scripting languages, please drop me a line or a comment—because this looks like buzzword tag soup.
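    To illustrate the comparison, here is roughly what the orchestration in those tutorials boils down to when written as plain code. This is a hypothetical sketch (the endpoints and field names are made up, and it leans on Python's requests library): call one service, branch on the result, call another.

        # The sort of "business process" the BPEL examples show, as ordinary code:
        # check credit, then either reject the order or place it.
        import requests

        CREDIT_SERVICE = "http://example.com/credit-check"   # made-up endpoints
        ORDER_SERVICE = "http://example.com/orders"

        def place_order(customer_id, items):
            credit = requests.post(CREDIT_SERVICE, json={"customer": customer_id}).json()
            if not credit.get("approved"):
                return {"status": "rejected", "reason": "credit"}
            order = requests.post(ORDER_SERVICE,
                                  json={"customer": customer_id, "items": items}).json()
            return {"status": "accepted", "order_id": order.get("id")}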
  • What I Want in a WM

    I have been looking at various window managers lately. The major desktop environments leave me feeling a little dissatisfied.
    KDE - I loved the 3.5+ branch of KDE. However, the world is moving on and that may not be an option much longer. The 4.x branch has been unstable on my machine. It works all right and looks excellent, but my desktop sessions disappear every so often.
    Gnome - Gnome treats me like an idiot. I don't like patronizing software. It is too eager to offer me information I don't want.
    Both KDE and Gnome offer a wide variety of keyboard shortcuts, but there are many operations that can only really be done with the mouse. In both cases, I want every single window manager operation to be performable from the keyboard. I don't think keyboard-only use needs to be required, but there should be no less power at the keyboard than there is with the mouse. A quick look at the alternative window managers of the Linux world shows that a lot of other hacker types want the keyboard capabilities I want. In addition, these window managers are infinitely extensible. However, I have two major problems with every one of the window managers I have looked at or tinkered with:
    They are ugly. I don't need Mac OS X beauty on the desktop, but I do not want something that looks like a '93 desktop put through a blender. I have to look at the screen a lot of the day. It doesn't have to be drop dead gorgeous, but it can't look like pure trash either.
    They drop their extensibility in my lap. I appreciate that every single detail can be tweaked, but not every detail should have to be tweaked. Some reasonable defaults are vital, especially when evaluating window managers. If I have to spend a few days reading APIs and writing customization files in various scripting languages just to learn whether or not the window manager has the potential to be what I need it to be, why should I invest that much effort?
    Especially hard for me to understand is the first one. I can understand that, to the small communities of hackers working away on their window manager, the extensibility is exciting. Since they know every detail of the innards anyway, it doesn't annoy them to have to look at the API first. But how can these themes be seen as anything other than astoundingly ugly? Browsing through a lot of the theme galleries just makes my eyes hurt. They make Windows 95 look like the space age of computing. So, the window manager/desktop environment (I do not care to haggle over which it is) that I want looks like this:
    Keyboard it all. If I want to move, resize, retile, restack, etc. windows from the keyboard, I can.
    Reasonable defaults. The full functionality is available without having to fiddle with personal startup files. I can start it up and begin tweaking it from there.
    Easy on the eyes. Antialiased windows, rounded-off window borders, and tasteful window decorations are key. Again, Compiz/Vista-like effects are not necessary. Just some good taste.
    A respectable collection of widgets. Look at OpenBox or Fluxbox. They have many panels written for them. This is a good thing overall, but could we start by having at least one good panel included in the default distribution?
    I am still on my quest. I suspect that if I work hard with Fluxbox or Awesome, I will be able to get what I want. Whether I have the time for this in the near future is, of course, another question entirely.
  • I Hate Info Docs

    GNU's insistence on building their own documentation system instead of sticking to good old UNIX's man pages is obnoxious. They provide second rate man pages and pretty good info docs. Why? Why not use some sort of generator that spits out the same data as both man pages and info pages? Anyone who wants to navigate from Emacs or use the schmanzy fancy navigation can use them and those of us who prefer man will still get full documentation. Doesn't seem like so much to ask. So, GNU: knock it off. I hate info and trying to suck me into using it isn't going to help matters any.
  • So Much for WORA

    The promise of Java is WORA: Write Once, Run Anywhere. I decided to tinker with BlackBerry development, one of the options for which is Java. So, I set up Eclipse (which is way harder than it should be; people found Vista's branding or Linux distros confusing? Eclipse makes them look positively simplistic) and installed the BlackBerry JDE plugin. After restarting Eclipse, what greets me? A message that the IDE could not find a Windows USB DLL. I'm running this on Linux because I thought Java was platform independent. There are USB wrappers in Java that hide the OS. What is the point of using Java if we are going to be tied to a single OS?
  • Levenshtein Rocks

    The company I work for is running a project in which various numbers are getting scanned. Often, the barcodes were missing or illegible and had to be typed by hand. On the backend, we found that a great many of them were subtly wrong. For example, O (letter oh) and 0 (number zero) were swapped. Well, it's pretty easy to drop in a quick AJAX callback that checks the barcode number to make sure it is on file. I thought it would be cool, though, to have the program suggest the correct number to the user. If they were right and it was just something we hadn't seen yet, then they could just leave it be. If not, the system would give them a much better idea of where they were messing up. Meet the Levenshtein distance. I had heard of it before (it is commonly used in spellcheckers), but never had a reason to use it. A quick googling turned up a blog post in which the writer implemented the dynamic programming algorithm for the Levenshtein distance as a MySQL UDF. It worked beautifully.
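    For reference, this is the textbook dynamic programming formulation of the Levenshtein distance, sketched in Python (the algorithm itself, not the MySQL UDF from that blog post):

        def levenshtein(a, b):
            """Minimum number of single-character insertions, deletions, and
            substitutions needed to turn string a into string b."""
            # dist[i][j] = distance between the first i chars of a and the first j of b
            dist = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            for i in range(len(a) + 1):
                dist[i][0] = i          # delete everything from a
            for j in range(len(b) + 1):
                dist[0][j] = j          # insert everything from b
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    cost = 0 if a[i - 1] == b[j - 1] else 1
                    dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                     dist[i][j - 1] + 1,        # insertion
                                     dist[i - 1][j - 1] + cost) # substitution
            return dist[len(a)][len(b)]

        print(levenshtein("1O23A", "1023A"))   # 1 -- a single O/0 swap

    Suggesting a correction is then just a matter of picking the on-file number with the smallest distance to what the user typed.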
  • Live and Learn

    I just learned yesterday that MySQL has a limit of 61 tables to a join. Who knew?
  • Why Am I Still Hacking This?

    This weekend, I spent a bit of time working on Latrunculi and my wife asked me why I was working on it, rather than one of my more businessey ventures. Well, the short answer was that Latrunculi has been a labor of love for a long time now. It started as an exploratory project, meant as a way to learn some basic AI algorithms, while underemployed (not unemployed--when unemployed, I had no time for Latrunculi) and expanded beyond it to a much larger vision. The goal is for it to have bleeding edge AI with 3D graphics and an excellent user interface. Latrunculi also represents a lot of growth for me. The initial work was done in Chicken Scheme (an excellent R5RS Scheme implementation, I might add) and is presently happening under CLISP. The switch was done because the SDL bindings were much more mature in Common Lisp. Additionally, I wearied of the ad-hoc way I had to assemble pieces of the Scheme language. Arrays? Vectors? Nope. They're in a SRFI, though. Modules? Not part of the language. And so on. Common Lisp has a much more practical bent to it as a language. Especially in the very first revisions, there is a lot of code that I am not proud of. The graphics code, especially, relied heavily on side effects and had a lot of poor variable names (brd for board). The number of set!s is simply revolting. It is coming together, though. A lot of the side effects have been cleaned up. On my personal git branch, I am starting to set up some unit tests. After a couple of things are done, we will have something resembling a real game, only with terrible AI (which is where I have been looking forward to being, again; I've read some papers and plan on doing a complete overhaul of the AI code). Since the initial burst, work on Latrunculi has been sporadic. This is the project I do when the boys are napping (like that ever happens) and when all the house is quiet. It has always been that little spot of technical refuge from the grind in IT. Latrunculi is one of those embodiments of why I love this trade so. This idea of building something beautiful and, at the same time, usable is appealing. This week has been one of the longer ones at work, with a new project getting underway. It is after times like pounding on XML dumps and inventory reconciliations that the algorithmic challenges and graphics programming are so soothing.
  • Moving Time

    I am moving Latrunculi's headquarters over to Mad Computer Scientist. Information is on the Latrunculi page (see the navbar on the right). I will be updating the Latrunculi page and the SourceForge page shortly. After I finish out a few graphics bugs, it will be time to get cracking on the AI.
  • Hosted Code

    I currently have two projects on SourceForge (Ocean and Latrunculi). After reading Eric Raymond's recent series of posts on forges (not limited to SourceForge), beginning here, I am seriously considering migrating these little buckets of code over to mad-computer-scientist.com. I could move the repos over to Git pretty easily, it looks like (which really sets Git apart from many of the other, otherwise wonderful, distributed SCMs I've used). Some quick wikis and we're done. Fossil is another possibility, offering the wiki and bug tracking for free, as it were. Raymond's vocal denunciations bring my own nagging paranoia to the forefront of my mind. I don't know. We'll see.
  • Knuth is a breath of fresh air

    I am almost finished reading a paper by Donald Knuth (about which another blog post will be forthcoming) and I must say that it is a breath of fresh air. Something about reading this simple paper in front of me is exhilarating in a way that I seldom feel, and it is because of something so very simple: Donald Knuth cares. He cares about computer science, he even cares about programming. He sees it as a fresh art. Those little things that seem to get brushed off, like beauty, both algorithmic and typographical, actually matter to him. He cares about the human beings reading the code as much as (dare I say more than?) the ones using the code. He clearly has some fun while he does this (the system WEB is made up of TANGLE and WEAVE, echoing Scott's "what a tangled web we weave / when first we practise to deceive"). One does not crank out code or even write a program. One composes a program. Perhaps I am showing my geekdom here, but it is a real pleasure to read someone who does care about these things. When we are programming, we are usually working with people who would rather have an impacted wisdom tooth than hear about what we do--euphemistically referred to as users. Then, with usability receiving its due as of late, we have a nice chunk of programmers who are trying to live as Spartan coders, thinking naught of their own pleasure but only of "the user." Then, we have the majority of those in the field who are punching a clock and do not care about the job either. In short, the world is crammed full of people who just do not care about programming as art and literature and something beautiful and pleasurable in its own right. It is nice to read someone who does.
  • Piece of CookiePie

    When testing a web site with various levels of user permissions, I thought it would be useful to be able to log in with different accounts on different tabs of one Firefox session (especially because one session is all you can really have of a given profile). Towards this end, I found CookiePie. It works great and, as advertised, lets you keep multiple sessions of a web application running. However, after a little bit of experimentation today, I found that it interferes with at least three web apps: the VMware 2.0 web interface, Facebook, and the Napster web interface (no, I'm not a pirate; the first time I used Napster was after it had become a legal, paid service). So, unfortunately, CookiePie must remain disabled on my Firefox profile. I need those apps a lot more on a daily basis than I need CookiePie. Still, it would be nice to see if they can clean up the bugs that make it interfere with other apps. I wasn't using CookiePie on any of the above, so it shouldn't have been interfering with them. It is possible that the only way to get this functionality right will be to bake it right into the browser. To the best of my knowledge, this has not been done in any major browser.
  • Am I the only one...

    ...who finds the Google Earth icon eerily reminiscent of AT&T's logo?
  • Paperless Offices

    I recently inherited a large set of files from a coworker who had moved on to greener pastures, by which I do not mean I received a zip full of Excel spreadsheets. These are honest-to-goodness paper files and folders. Working in IT, you would think that if the paperless office had arrived, it would at least be present in that department, if no other. Alas, it has not. This is the second set of files that I have inherited during my time here and my own set of files is getting to be fairly large. Most folders are stuffed with hand-written notes I took during various planning, scoping, and implementation stages. The point here being, I am throwing no stones, but am making an observation: even in technical fields, few people are actually implementing paperless offices. If anything, the laser printer allows us to run off far more paper than ever before, because its low threshold of use and expense allows people to use it for far more trivial things than would have ever been justified in the dark ages. Computers allow us to store more information than is readily doable with paper, in a more portable fashion (most laptops have enough hard drive space to carry libraries of information, available at the brush of a finger), more safely (some laptops and spare drives are a lot less of a fire hazard than cupboards stuffed with paper and cardboard), and more intelligibly (typed notes will be legible by anyone, whereas many people, like me, have terrible handwriting). Here, it would almost be convenient to scapegoat the older workers and simply blame them: it is their dedication to the old order that keeps the rest of us from making progress. There is some small amount of truth here. We have probably all seen that one person who prints off every darn e-mail only to turn around and file it in a cabinet. However, that would be a gross oversimplification. When I was in college, I noticed that few computer science students used their laptops for notetaking. There were plenty of laptops in the room, mind you, but most of their owners were surfing or playing solitaire (I played solitaire through a solid semester of Jewish History--did well on the tests, though). The students who actually took notes did so, by and large, on notebooks or in binders. So, that old guy over there might abuse the printer a little more than usual, but he is not the real issue here; after all, the younger generation, growing up on iPods and Facebook, takes notes by hand. The reasons for this, then, are more deep-seated than a simple matter of generations. The sad truth for the proponent of the digital office is that computers are simply not convenient enough, yet, for this purpose. Notepads are still much more convenient than laptops for most purposes. When taking notes, it is not uncommon to be scribbling down diagrams and making outlines in ways that a computer may present better, but which require a little more effort upfront. If you are typing your proposal, those extra few seconds to make the bullets look good are well worth it. If there is a flurry of talk in a meeting or during a lecture, those extra few seconds put you too far behind. Similarly, those diagrams may take, for a fast user, ten minutes to put together--but that ten minutes is simply too long when everything is happening in real time, especially when they can be sketched in seconds. Laptops, as light as they are, are appreciably heavier than a notepad, so it is a lot more convenient to grab the pad than to haul out the laptop. Battery life is also a concern.
Despite advertising to the contrary, the best you can usually do is a few hours of battery time at full use. Sure, you can get more life if you don't use the machine as much, but then it isn't as useful either. This particular group of objections should be handled within the next few generations of hardware. With netbooks becoming more common, general laptop size decreasing, and battery life increasing, this should go away quite soon. Cost does not really seem to be an issue anymore. Most college students have laptops--virtually all have computers and could have had a laptop had they so chosen. Like I wrote above, the problem was not that students did not have laptops, but that they were not using them to go paperless. All right, then, if we can't get people to use their computers yet, what about digitizing the output? At least we could save the storage and remove that old fire hazard. Not necessarily a bad idea, but retyping and resketching all notes is quite time consuming. Scanning presents another option, but the several times greater storage requirement (storing those notes as TIFFs instead of TXT or DOC will tax your drive space more), the greater difficulty reading (scans, especially of penciled or highlighted text, are often harder to read), the loss of the ability to search (which, as Google, Apple, and Microsoft are all realizing, is one of the most important abilities of computerized documents), the loss of flexibility (when things change, altering those TIFFs is a lot harder than changing a text document), and the poor software interfaces (have you seen most document management systems?) make this a loss. The real problem is ultimately one of convenience. We could bring our laptops to everything. We could type it all in Word or OpenOffice. We could use the touch pad to do all of our diagramming. But we don't, because it is not sufficiently convenient for the problem at hand, and the reasons lie in both the hardware and the software. On the hardware side of things, we need to see laptops that are even lighter (without loss in functionality; about the only thing I see being able to go is the CD drive--more and more software is web driven or, at least, could be deployed from another machine, and more and more music is being stored digitally) with even longer battery life. Additionally, an easy way to make quick sketches is key. I am sure advancements could be made in diagramming software, but until someone can take a stylus and make a quick sketch as readily as they could with a pen, the laptop will still not be good enough. It would also help if laptop screens were more like paper--in short, if we could see digital ink making its way from niche ebook readers onto laptops, so that notes can be viewed cleanly and crisply in a way that will not tire the eyes the way traditional displays do. On the software side of things, we need software that is more conducive to taking notes in the way that people take them in real life. Outliners are good, but people do not take perfectly outlined notes on the fly--nor can they be expected to. Oftentimes, notes are taken in brainstorming or design sessions. These meetings cannot be rigidly organized or else they will lose all utility. One of the benefits of paper note taking is the loose, semi-organized way in which notes and diagrams can be taken and mixed up. This would need to be made available through software. There is still the X factor. Speaking for myself, I enjoy the feel of handwriting and the look of paper.
It is a relief after using computers and technology all day long to be able to look at and feel something different. I doubt that for me, personally, this will ever go away. However, by and large, except for a few strange people (like me; I even have a manual typewriter) this will fall away in the next generation or so leaving only the items above. So, will we ever see the paperless office? I do not think that question can be answered with any degree of certainty. My personal point of view is that the hardware will be there within the next ten years. The software is a trickier proposition--it could happen at any time. Tomorrow someone could write the perfect software or it could take another thirty years. Even once this happens, paper will linger a while longer.
  • A Quick PL Thought

    I hate programming languages that make me do a lot of typing at a stretch. If I can type for a long time, it means I don't have to think while I'm doing it and if I don't have to think about it, it can be automated--and if it can be automated, the blasted machine should be doing it anyway.
  • Open Source 3D Printing

    Recently, I was very surprised to come across some open source 3D printers. I stumbled across them while looking to see how cheaply one could buy a 3D printer (or "fabber"). The cheapest thing I found was about $15k. Certainly reasonable if you are a large corporation and this is for your engineers or R&D people, but not so reasonable if you are just a mad tinkerer like me. The schematics are free to download and the software is released under the GPL. Frankly, I am a lot more interested in those free schematics than in the GPL'd code, but I'll take the whole shebang. So far, I have come across at least three open source kits to do this: Fab@Home, RepRap, and MakerBot. There are even the beginnings of a community at Thingiverse swapping designs for objects that can be created with these home brew fabbers. This really has my curiosity running at this point. Obviously, the components can be purchased for a fraction of the price that you would pay for a "cheap" 3D printer. The idea has a lot of appeal for me, personally. When I was younger, I used to design board games on notebook paper, use pen and crayons to lay out the pieces and such, then tape all of the pieces together. I could go back to the old drawing board with the ability to do something a little more elaborate. Moreover, I've had various ideas for things over the years that this would have been excellent for. Finally, building the thing would be a heck of a lot of fun. When it comes down to it, you would be building a manufacturing robot. How awesome is that?
  • I Want One

    I just saw Mirage, an incredibly portable chess set, on Yanko Design. (http://www.yankodesign.com/2009/09/23/now-you-see-chess-now-you-dont/) Basically, it uses clip-down style pieces and a digital projector to project the board onto the table surface. The whole thing wraps up in a package clearly inspired by the iPod. So, while it will no doubt be out of my price range if and when it comes out, I still think I want one.
  • Lots of Insipid Stupid Parentheses

    For a bit of private research, I was reading some papers on MLisp, a Lisp dialect (a pre-processor, technically, as it simply compiles its input into normal, S-expression Lisp code) based on M-expressions. Given that the first paper I read was published in 1968, it seems that people have been griping about Lisp's parentheses for almost as long as there has been a Lisp to complain about. Of course, as Bjarne Stroustrup said, "There are only two kinds of languages: the ones people complain about and the ones nobody uses." Some of the original motivations behind MLisp have fallen away. For example, the MLisp User's Manual mentions three motivations (page 2):
    1. The flow of control is very difficult to follow. Since comments are not permitted, the programmer is completely unable to provide written assistance.
    2. An inordinate amount of time is spent balancing parentheses. It is frequently a non-trivial task just to determine which expressions belong to which other expressions.
    3. The notation of LISP is far from the most natural or mnemonic for a language, making the understanding of most routines fairly difficult.
    Both Scheme and Common Lisp (pretty much the only remaining living variants of Lisp) provide comments. Since R6RS, Scheme includes multiline comments as well as single-line ones, so this motivation is clearly gone. Two and three really have no business being separate. They both say that Lisp is hard to read, a complaint that has persisted ever since, to the point where Peter Seibel's 2005 book, Practical Common Lisp, briefly addresses the objection near its beginning. Here is a snippet of the result, from Enea, Horace (1968): A:=DO I :=I+1 UNTIL FN(1); RR:=#:acAD(),READ()>; A:=DO I :=I+1 UNTIL FN(1); B:=COLLECT UNTIL I EQ 'END; WHILE ,((A:=READ()) EQ 'END) DO INFUT(A); C:=WHILE 7((A:=READ()) EQ 'END) COLLECT 4.b; FOR I ON L DO FN(1); J:=FOR I IN L DO FN(I) UNTIL QN(1); FOR I IN 1 BY 4 TO 13 DO FN(1); FOR I IN 1 TO 10 DO FN(1); J:=FOR I IN L COLLECT FN(1); J:=FN(FUNCTION(+), FUNCTIC!N(TIMES)); J:=,>> SUB ~,l>; J:=q(a Y 2 ., 3 9 4 ? 5 Y 6Y 7 Y 8 Y gY o>); OFF; END. (Input follows end.) This MLisp, which looks like an evil union between Pascal and Basic, is the result of one of (if not the) earliest attempts to solve that problem of those pesky parentheses. So, over forty years ago, we get to see two traditions established: people whining about the parentheses in Lisp, and people using Lisp to build DSLs. And the world ever is as it always was.
  • Web Servers in the Language du Jour

    Has anyone besides me noticed an increased tendency for people to write new web servers in their language du jour? For example, we've got the WebServer CodePlex project to write one in C# .NET. Django packages one written in Python, for development purposes. Ruby has Mongrel. There is Hunchentoot for Common Lisp. Heck, I even found a Perl one on SourceForge whose last file release date was in 2000. The height of absurdity comes with nanoweb, a web server written in PHP. That just seems wrong, like the programming gods should strike someone down for even thinking about it. That's right: it's not enough to watch the world blow security holes in PHP web applications, now they get to do it in PHP web servers, too. That's just great. Whatever happened to good old C-based web servers, like Apache? About the only one in that list I can really see is Django's. It really does simplify development by allowing you to push deployment details off until you are ready to deploy. Visual Studio does the same thing when you are testing ASP.NET applications. The other ones, though, actually want to be production web servers. Django warns you against deploying on the development web server. About the only way you could use Visual Studio's (which, dollars to donuts, is probably just a stripped-down version of IIS) is to run the project in debug mode on the server in an instance of Visual Studio--which would be just plain stupid. Hunchentoot is also nice, because few web servers have good tools to integrate with Common Lisp. About the best you'll do is straight CGI or mod_lisp--and, with mod_lisp, you will still have to interact with the module at a fairly low level (which I found disappointing). If you are running a web application for the whole world to see, then you are far better off with a larger-scale HTTP server, like Apache, IIS, or Lighttpd. If you are building embedded applications, use one of the micro C-based servers--you'll need those precious ounces of resources that C can save, even more so if you are embedding the thing in a printer or something like that.
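    Part of the reason these keep sprouting up is that a toy HTTP server costs almost nothing to write in a modern scripting language. A minimal sketch in present-day Python (fine for tinkering, and exactly the kind of thing that should never face the public internet):

        # A toy HTTP server in a dozen lines -- the kind every language grows.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Hello(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b"hello, world\n"
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("127.0.0.1", 8000), Hello).serve_forever()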
  • Dumb quote of the day...

    "The value of comments should be obvious: in general, the clarity of a program is directly proportional to the number of comments." --David Canfield Smith, "MLisp Users' Guide" Stanford Artificial Intelligence Project, Memo AI-84, page 5. I guess Mr. Smith never bumped into one of those programmers (we've all seen them), who do things like this: i=i+1; // add 1 to i Such programmers fill the source with comments that contribute nothing to the understanding of the flow of the program. Or, how about someone who does this: /*$foo='baz'; [Ed: snipped about three hundred lines ] */$a++; printf('hello, world!'); The Smith Conjecture, as I do here and now dub it, is fatally flawed, doomed to be replaced with: "Programmers are in a race with the Universe to create bigger and better idiot-proof programs, while the Universe is trying to create bigger and better idiots. So far the Universe is winning." (Richard Cook). Perhaps, if the whole world were full of Donald Knuths, whose literate programming considered every bit of code as being like a piece of literature, things would be different. However, we have a lot more blub programmers running around abusing comments to their maximum. I would also argue that few real life systems can afford to self-document, whether done by hack programmers or true craftsmen. If you are building anything that is needed, there is probably little room for this approach because the world will keep changing and it will continue to change at such a rate that you cannot write another book every time it does.
  • Ubuntu's Hardware Support

    When I bought a new computer over a year and a half ago, I was unpleasantly surprised to find that the then-current Ubuntu did not support my hardware in full. I have written here about some of the trials and tribulations I have had getting everything set up just so. Ultimately, I had to wait a couple of versions, but the most important thing I did to get everything running stably was to upgrade from the Ubuntu-sanctioned kernel version of 2.6.28 to the newer 2.6.30. Many an article has been written about how I never should have had to do that, or know that, or have any concept of what a kernel was, how it differed from the operating system or desktop environment as a whole, or what version I needed. I will say upfront that I agree. While I have a pretty good nuts 'n bolts knowledge of a Linux desktop, I should never need it to get up and running. The real problem here is less one of raw technical capability (since I was able to solve the problem with an upgrade) than it is the simple fact that most manufacturers give Linux no thought on the desktop. Windows would run a lot rougher if OEMs didn't work with Microsoft to ensure otherwise. The idea of this all working, out of the box, with no OEM involvement is simply ridiculous. The only ones who can test a hardware configuration before it is released into the wild are the ones putting it together in the first place. Until OEMs start working with the major Linux distros (or, at least, the major distro of their choice), this problem will never entirely go away. Contrary to what many Linux advocates say, OEMs are not evil. Ultimately, they don't care what operating system people run, as long as the money winds up in their pockets. If Dell believed that an immediate adoption of Haiku (an OSS BeOS clone) would make them top dog, they would do it. Apple straddles a fine line between being a software company and a hardware company, but this is not so of HP/Compaq, Dell, Gateway, eMachines & company. They sell boxen. If changing OSes or supporting more OSes would mean more sales, they would do it. The only way, then, that the OEMs will ever support Linux will be when there are enough Linux desktop users that it is worthwhile in terms of simple supply and demand. The only way that Linux will make its inroads is if distro packagers make life as easy as is humanly possible in the meanwhile. I have to assume that I am not the only one to purchase a machine that needed a bleeding edge setup to work properly. So, the only way to really service these users (like me!) is to make it easier to go bleeding edge when it is necessary. I understand the idea of sticking to a version of the kernel, like 2.6.28, for the duration of a release. It makes it a lot simpler to ensure that all of the software will work together. However, to accommodate those with newer rigs, the clear solution is to make it easier to go bleeding edge. It need not be something so trivial as clicking a check box on some preferences dialog, but it should be easier than it is to use a later kernel with a given release. Fedora almost gets it right with Rawhide. By changing a simple option, you can go bleeding edge. However, Rawhide is less a bleeding edge repository than it is like running off a random dev's test box. You never know if it will work the next morning. For the system components that directly support hardware, like the kernel and some of the low level daemons, I would recommend a special backports type of repository that is being updated alongside the new one.
If there are hardware difficulties, make it easier for the user to use NDISwrapper (the best thing to ever happen to Linux wireless) and to upgrade to later versions of the kernel without sitting atop Linus's Git branch. It would not be perfect, but it would help a great deal because, as things stand now, you have to fight with Fedora or Ubuntu (or else do an inordinate amount of work) to use a version of the kernel that is not officially sanctioned.
  • R6+RS

    I have been following the R6RS and R7RS discussion processes since shortly after the beginning of the former. It is educational, if nothing else, and I do enjoy watching the debates, though I have seldom posted to the group. As with virtually all engineering, most decisions are less matters of things that are strictly correct or strictly incorrect (read: wrong) than they are discussions of tradeoffs. I have little doubt that there would be a lot less heat in these debates if more issues were strictly right or strictly wrong. It would then become less a question of design and more a question of solving the problem in the straight-out method used to solve problems in mathematics. All of those posting are extremely intelligent. This is not surprising. Given the state of the industry, few blub programmers ever make it so far as to hear about Scheme, let alone care about the next standard issued under that name. Most of these people have PhDs and are doing this as part of their research. So, I would summarize the R6+RS mailing list as being a lot of smart people arguing heatedly over design tradeoffs. At least it keeps things from being boring. I find it interesting how dedicated these people are to sitting down and proving to the whole group that their way is obviously the best way. It may very well be, but if the majority of those standardizing Scheme do not want it, why worry? Why not take R4RS, R5RS, or R6RS and draft your own spec, publishing it under your own name? Just take it and create another Lisp dialect. Show us all that it is better than Scheme or Common Lisp. It is almost as though the languages world has decided that we shall have precisely two Lisps: Scheme and Common Lisp. Most of the Lisp-esque languages out there start from Scheme or Common Lisp and make some minimum number of tweaks (often, like Clojure, to make it run on some other platform) rather than designing a new language from the ground up. It seems to me that there is plenty of room for interesting experimentation. In fact, it seems to me that the standardization process would be a lot more fruitful if we could see a lot more Lisps out in the wild. We could take the good and avoid the bad and have a real, living model to look at instead of some airy discussions.
  • This is Awesome

    MonoDevelop, in version 2.0, has vi keybindings available. As a vi-addict, this makes me one happy camper. Especially because MonoDevelop is available on Windows, Mac, and Linux...
  • What Web Programming Gets Right

    There are a lot of things in the wonderful and wacky world of web programming that are simply wrong, browser-specific hacks being the biggest by far. One thing it gets right, compared to the majority of the desktop programming world, is that setting up a GUI is declarative. Most .NET, Java, and C++ GUI libraries are procedural when it comes to setting up a UI. True, they usually use OOP techniques, but setting up a UI is a long string of statements that say, in effect, "create this widget, with properties X, Y, Z, A, C, and D and put it here". There are some things in place to try and take the pain out, but this is what is happening. For example, if you look at the code that the .NET form editor creates, you find exactly this pattern. Don't even get me started on Java's Swing framework. WPF is a move towards a declarative way to do GUIs, as is Glade, but these make up a comparatively small amount of the GUI code in the wild. For the most part, GUIs are still done in a highly procedural way. Programmers who come solely from a Java/C++/C# background (hereafter referred to as the "Java way") accept this as more or less the way things must be done. This is because the Java way is only pseudo-OOP in the first place and the meat of the work is done procedurally. The heavy lifting does not happen, a la Smalltalk, through signals propagating through a web of classes, but through a series of instructions in a method. However, the functional tradition (which includes languages that are less hard core than Haskell) does not view the world this way. Instead, in functional languages you seek, as much as possible, to be declarative in your code. This brings me to the greatest irony of all: despite having the best facilities to represent user interfaces, functional languages have few libraries for GUI work and are seldom used for this purpose. HTML has the right idea and should show the way: a control declared in markup, with its properties spelled out as attributes, is exactly how all GUIs should work. Hopefully, in the future, we will see more tools that declare GUIs instead of constructing them.
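    For concreteness, this is the procedural style I am complaining about, sketched here with Python's Tkinter rather than the .NET designer output (the widgets and layout are just an illustration):

        # Procedural GUI construction: create each widget, set its properties,
        # and place it, one statement at a time.
        import tkinter as tk

        root = tk.Tk()
        root.title("Login")

        label = tk.Label(root, text="User name:")
        label.grid(row=0, column=0, padx=4, pady=4)

        entry = tk.Entry(root, width=30)
        entry.grid(row=0, column=1, padx=4, pady=4)

        button = tk.Button(root, text="OK", command=root.quit)
        button.grid(row=1, column=1, sticky="e", padx=4, pady=4)

        root.mainloop()

    The equivalent markup would declare the same three controls in three lines and leave the "how" to the toolkit.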
  • ISO and ANSI are Irritating Me...

    Lately, as I have been swinging around various technologies, I have increasingly found myself directed to various standards issued by the ISO. The latest ones (with reasons ranging from curiosity to serious research) include ISO 8879, published in 1986 and specifying SGML, the ancestor of HTML and XML, and ANSI/INCITS 319-1998, which specifies Smalltalk (this one I managed to find on the web in the form of its last draft). Now, maybe I am getting spoiled by the IETF, ECMA, and W3C, which release both their drafts and their final standards for free, but I find the prices that the ISO and ANSI are charging ridiculous. In fact, I would argue that if you have to pay for a digital copy of the standard, then the whole point of standardization has been bypassed. Standardization allows anyone to pick up a copy of the standard and implement what it says with the expectation that it should work with other implementations (hey, that's the theory; practice varies). Charging for paper copies is, of course, understandable and fair, since it costs to put those together. When you charge for access to a standard, and especially when you charge a lot, it dramatically reduces the number of people able to attempt to implement it (slowing down the spread of standards-compliant versions of the technology). It also reduces the industry's ability to check behavior and see if it matches the specification. Let us say, for example, that the World Wide Web Consortium charged a ballpark $1,000 for a copy of the XML 1.0 specification. I am using some XML parsing library and it does not behave as I expect. If the cost for the spec is $1,000, I cannot afford it. There are many companies who will refuse to afford it. After all, how valuable can that specification be? How can I check whether I am correct or the parser is correct? I can't. Moreover, many of the people chairing these boards are professors or researchers in whatever technology they are working on, so their salaries are basically paid by someone else anyway. The rest of the costs of running the organization should be minimal. Both of these organizations are non-profits, ANSI being a private non-profit and the ISO being an NGO, so why charge for the standards? They are not supposed to be making a profit anyway (I, personally, think that non-profits and our entire tax structure are insane, but that is, again, another blog post), so why not disseminate the standards more widely? This probably sounds like me deviating from my capitalist roots. It is not, really. That would be the case if I wanted some sort of governmental agency to handle standardization (blasphemy!) or some sort of regulation enforcing the behavior I described above--but I believe nothing of the sort. The ANSI and ISO have every right in the world to try and make an industry out of technology standards, but I, as a consumer, have the right to refuse them. It all comes down to supply and demand and, I think, over the long haul the trend will go very much against the ISO and ANSI. I am sure some larger corporations are only too glad to whip out their wallets and pay up for these documents, but they are in the minority. Over time, I am certain that ISO and ANSI will be forced to shift towards the policies of the IETF, ECMA, and W3C. For example, when Microsoft decided to publish a standard for C# and the .NET framework, they did not go to ISO and ANSI. Instead, they went to the ECMA.
Most likely, this was done for reasons of time and expense, but this too is what differentiates the smaller organizations from the ISO and ANSI. The demand will continue to move in this direction: towards more efficient standardization, cheaper standardization, and wider availability of said standards. Ultimately, all standards organizations have two bodies of clients: the standardizers and those who would read the standards. If no one publishes specs through your organization, the rest does not matter. Similarly, if you publish specs that no one reads, you will be irrelevant. It is ultimately in the best interests of both parties to have things quick, easy, and cheap. Let us say that Microsoft did not want their spec to be widely available. Why would they get it published at all? It would be easier to just use it for your dev teams and never let it see the light of day. With the advent of the world wide web, no one needs a standardization committee to make their work widely available. A few minutes and a few dollars and it is up for the world to see. Largely, what these committees offer is prestige, but inaccessible prestige fails to serve either clientele.
  • Twitter

    I separated out my blogs because I believed that most people interested in my programming opinions would not be interested in my personal writings and vice versa. Every once in a while, there is a little bit of crossover. I just wrote a post on my personal blog about Twitter, which may be of interest to what little audience I may (but probably do not) have here: http://writing.mad-computer-scientist.com/blog/?p=131
  • The Days of the Cybersquatter are Limited

    Run any number of searches for domain names and you will find most of the truly good ones taken. This is a minor irritant when a legitimate organization or person of some kind is using them. What is annoying are the boatloads of domains that have been claimed by some person or organization who does nothing but put up a page of banner ads--then attempts to sell said domain for copious quantities of cash. In short, these entities, known as cybersquatters, claim any domain name that they think someone might possibly want in hopes of cashing in big. Over time, I have heard various ideas for dealing with cybersquatters, most of which involve some regulatory agency (usually ICANN) stepping in. The US has already put some laws into place to help combat this. Really, ICANN, the US, India, and whoever else may take an interest in this should just forget it. The market will work this out, and it has already begun to. The economics of cybersquatting rely on the given domain name being so important that some other person or entity feels they have no choice but to pay an exorbitant sum to acquire or reacquire the domain. This motivation is dying, and with it will die the profits, real or imagined, that can be obtained through cybersquatting. For established businesses the motivation is, rightfully, a powerful one. People expect that if they go to ibm.com it will take them to the website of International Business Machines. It is simply too big a company to expect otherwise. Most of these establishments have already acquired the domains they wanted or needed. IBM will not likely lose a domain name any time soon. Even if there were once some large sums made, the big dogs are done playing this game. New businesses by and large cannot afford (or are unwilling to afford) the purchase of a squatted domain. Instead, the choice of business name is made alongside the search for a domain name. If a suitable domain cannot be had, people are moving towards choosing another name rather than paying what amounts to protection money. Squatters are, no doubt, still attracted to this little get-rich-quick scheme because money really has been made that way in the past. Those profits are dwindling and will continue to dwindle until only a few foolish people continue to attempt it. In that day, cybersquatting will be all but dead--without the help of any bumbling, meddling regulatory agency.
  • Disabling Author Info on Drupal

    You know those little headers that say "Authored by at 10:00"? So far, I haven't had a Drupal setup in which they were actually wanted. I googled it and the best suggestions I came up with were to add display:none; to the info's class in a custom CSS file. Not bad, but it seems a little clunky. Someone on StackOverflow almost had it right when they suggested making changes to the settings of a specific theme. Of course, not every theme (including the ones I have been using) has this option. It turns out, though, that the settings can be set globally by going to Administer -> Site Building -> Themes, then clicking the "Configure" tab. Simply uncheck whichever node types you want to disable the author information for under "Display post information on". Much cleaner.
  • A Quick Rant

    For a language that is supposed to be web oriented, PHP really stinks for setting up web services. Take the default SOAP library. It lets you set up a request handler and populate it with methods, but it has no mechanism for automatically generating WSDL. What the heck? In ASP.NET, when I code up a class and mark methods as WebMethods, the WSDL is built automatically. With the default SOAP library, you have to provide a URI to the WSDL. In short, you have to use a 3rd party WSDL generator or write it by hand. Why in the heck would anyone want to do that? And even if you add a 3rd party tool, it adds one unnecessary step to the process if you make changes: you now have to regenerate the WSDL if you change the signature of a method, add a new method, or drop an existing one. NuSOAP is a little better, but come on. This is the default library. It is fine for consuming web services, but who would ever want to write a full-blown web service in this environment?
  • I didn't know you could do this...

    I was paging some code through less and accidentally hit the 'v' key, and it launched my editor on the file. Unfortunately, it doesn't work when the file is coming through stdin (though you could rerun the command and redirect the output, launching the editor afterwards). This would be easy to implement. Dump the input until EOF into a file in the /tmp directory, then launch the editor on it. I pulled open the man page and confirmed it. I guess this just falls under live and learn.
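Sketching out what I mean (a rough illustration of the approach only, not how less itself is implemented; the temp file name and the vi fallback are my own assumptions):
    # capture piped input to a temp file, then hand it to the user's editor
    tmp=$(mktemp /tmp/pipe-edit.XXXXXX)
    cat - > "$tmp"              # dump stdin until EOF
    "${EDITOR:-vi}" "$tmp"      # launch the editor on the captured copy
Saved as, say, pipe-edit.sh, it would be used as something like dmesg | sh pipe-edit.sh.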
  • What it takes to take Google

    Since Google's meteoric rise, many self-proclaimed Google-killers have come along. Obviously, they were more smoke than flame. Google is bigger and badder than ever. The most recent are probably Cuil and Microsoft's reanointed Bing, but none of them has as yet made any meaningful dent in Google's size. The reason is simple. The search they offer is inferior to Google's. Google has diversified a great deal, but their bread and butter is search. If Google lost on the search front, their other applications would not sustain them. Sure, there is some advertising space in GMail as well as Google Earth, but virtually everyone who owns a PC has used and knows of Google search. A smaller percentage uses Google's more expansive offerings. So, to defeat Google one of two things must happen. Either Google has to be beaten on the search front or the internet itself as we know it must become irrelevant. The latter is hard to imagine, but, then again, the fall of the mainframe and the rise of the internet would have been hard to imagine a ways back. So, as things stand, to take down Google, our hypothetical company must defeat Google in the search arena. Again, Microsoft and Cuil are, so far, thinking along the same lines. The problem is that they are not really building a better search than Google is. Google is not invincible, here. With Googlebombing and Googlespamming on the rise, the signal to noise ratio in Google searches has dropped off noticeably. Any search engine that mirrors Google's algorithm will fall to the same problem. Research needs to go into what it takes to build a next generation search engine. In its essence, Google's algorithm takes some combination of popularity (i.e. links to the page) with the number of times that the search words (with some fuzziness built in) occur. It is actually a very good little algorithm, but we are seeing its weakness. I propose that the next generation search engine (whether it is built by Google or someone else), hereafter titled NGSE, will have to be a little more intelligent. In fact, I would go so far as to say that this NGSE is little more than a massive artificial intelligence riding the back of the spiders that crawl the web now. Work in artificial neural networks (ANN) and pattern matching would be key in something like this. Rather than looking dumbly at what the page says it's about and what other people say about it (and you'll notice that the problem with links leading in is that it does not indicate that the person linking actually liked the site; they might be linking to it to run it down), it tries to see if the page matches the pattern of the person searching. This sort of engine would be based on what you mean, not just what you say. It would take the semantic web to the world, but without requiring the world to adapt to it, as every proposal has so far. Case in point: Google Images. From firsthand experience, I can say that when I search for images there are almost always better matches for what I was looking for than what Google brought up. If I had to guess, I would say that the alt attributes on the img tags go a long way in determining the ranking. The caveat is obvious. Most web masters do not put alt attributes on all their images, even if they should. Imagine, instead, an ANN that could, with a high degree of success, scan the image and deduce what it is showing. The more precise it was, the better it would be at showing the users what they wanted to see.
Whether or not an ANN/spider based search engine is even really feasible is an open question. Especially for the image matcher discussed above, nothing close has even been created. Even if we could build such an engine, would its computational cost be prohibitive to using it across the whole internet? After all, one of the keys to Google's success was their ability to parallelize their algorithm on the massive scales required. Ultimately, something like this would have to be built to defeat Google on their home turf. The way to win when the opposition has a massive advantage is to be significantly better. Parity just doesn't cut it and Google got where they are for a reason. Their methods are nothing if not sound.
  • On Cargo-Culting

    Like many a working programmer, I get to see the results of cargo cult programming a lot. To those of us who know better, it is evil, but I decided to sit down and write up a quick article on why it is evil. After all, the very reason that most cargo culters cargo cult is that they do not believe that it is evil. Here is my composite picture of a cargo cult programmer: our cargo-culter is Joe Cargo (I'm feeling creative today). Joe's interest in computers is mild. There is probably a fascinating story of how he got stuck in IT in the first place. Maybe it started out by setting up a wiki for a few friends. Or doing a quick and dirty website for a local ma and pa shop. Perhaps he worked at Megacorp, where the path to IT aid is a mile long trail of paper and he got conscripted by his real department to fill the gap in their IT resources, only to find his stopgap skills worth more than whatever it was he got hired for (as though anyone could remember). In any event, he never really moved beyond that point. He surfs the web and slaps together whatever kind of, almost, sort of, probably, if you don't look at it cockeyed works to complete the task at hand. He has no formal training and has never given any thought to what "best practices" would be. Manual, repetitive work is a way of life. He does not give it a second thought. Almost every other job in the world is based around repetitious labor, why should this be any different? Joe meanders from project to project and company to company always in the dark as to the real world of programming and computer science. If Joe ever meets a true practitioner of the craft, he would regard him as a wizard, dark and terrible, but useful. Joe Cargo is probably fairly proud of his work. Not excited by his craft, but satisfied with a job that he believes is well done. It runs, after all and there are a lot of lines packed into a lot of files. He probably has no clue that someone more skilled than he, let's call him Sam Sixpack, views the whole creation as the spawn of Satan. Sam Sixpack looks at Joe Cargo's work and sees unnormalized tables--and I don't mean the kind that should be 5NF. No, I mean the 250 column wide variety with repetitive data and would-be primary keys that are based on names that are not always consistent. He sees work that takes hours to run, rather than minutes, worst case. He sees code that has been copied and pasted all over the code base, rather than centralized in a function, class, module, or what have you. When Sam sees this, he groans at the hours it will take him to fix or update every single instance of that one block of code. Sam sees code that feels dirty, rather than clean or elegant. It lacks formatting, it rambles, it does unnecessary work. In general, it just does not make sense. If we assume that this is a reasonably accurate composite of most cargo culters, it is not too hard to examine it and pick out the hows and the whys. Why is easy. Cargo culting is the result of laziness. Larry Wall once wrote that one of the virtues of a programmer was laziness, but this is another kind of laziness. The laziness that Wall wrote about was a programmer who refused to do work that could be automated and, so, would put in extra work to save manual effort in the future. Cargo culting is based on a laziness, not of overall work, but of the mind. They cannot be bothered to think. They see something, but it would require straining the brain too much to understand it. 
If everyone took this approach to technology, we would still be pushing rocks around with our bare hands because no one would have seen the utility in investing in tools. What can those of us who care more do? Unfortunately, not much. Cargo culters got where they are through sheer sloth. If they could not be bothered to learn on their own, when computer books, articles, and resources are plentiful (or even just learn from the code they steal on a regular basis) there will not be much that you can teach them. Those who want to learn, learn more from having a knowledgeable person nearby. Those who do not, will not learn anything either way. In the final analysis, cargo culting is like any other form of laziness in business. It can only be handled by the person in question shaping up or shipping out. It is sad because it is bad for everyone, whether they realize it or not. Businesses get sloppy, second rate software. Users get a tool that is often the bane of their very waking existence. Next-gen coders get headaches from trying to clean up the mess sufficiently to keep their own jobs. Finally, the cargo culters themselves get a bad time of it. The fruits of this mudball building cannot be hidden indefinitely, even if the cause of it can. The cargo culter may be fired, or leave under increasing pressure to maintain the impossible. In any event, they do not get the best they could have, had they built something well. Their own skills (what few they have) decrease in value, since the culter does not learn (if they did, they would not remain cargo culters) and keep up with an ever changing industry. So, just remember, when you see a cargo culter, you see a history of everyone involved losing.
  • Stick a fork in Chandler

    I wrote not too long ago about my impressions of Chandler and its development after reading the book Dreaming in Code. Now, before I continue, I would like to point out that I understand that the open source world has, by and large, forgotten about Chandler. For good reason, too. It is the open source world's equivalent of Duke Nukem Forever--well funded, ambitious, and hyped vaporware. So, to some extent, the world has already stuck a fork in Chandler, but bear with me. The interest in Chandler may be minimal, but if you go to the OSAF's web site the dream is clearly still alive. When I logged into my Gmail today and was marking some mailing list messages read, I noticed that there was a new option: "Add to Tasks". Hmm. Gmail now has an option whereby an e-mail can "become" a task. This sounds strangely like the Chandler concept of stamping, the goal of which was to "knock down the silos" dividing the different types of information. The dream that was behind that software is starting to leak out and spread. Pretty soon, they will have nothing to bring to the table, not even the vision that kept everything going. Gmail is rapidly heading, through evolutionary development, to where Chandler has only dreamed of going.
  • Philosophical Language

    I began reading In the Land of Invented Languages today after hearing about it on Lambda the Ultimate. Currently, I am reading about John Wilkins' failed attempt (one of, apparently, many) to build a philosophical language. Like several readers at LtU, my mind turned to its application to programming. Like the noble readers of that blog, I feel that the correlation between constructed languages (from Elvish and Klingon to Esperanto) and programming languages is a strong one. The irony is that the latter has gained more traction than the former. Many constructed languages, like Wilkins', are based around the idea that ambiguity should be removed from language. In programming, it is not a matter of taste. Ambiguity must be, and eventually is, removed. In complex languages like C++ (which, I assert, is complex in entirely the wrong way but that is a post for another time), it may be unclear from a spec how a feature should be implemented, but the implementors ultimately make some decision. So, we have dialects: Visual C++, GNU C++, Borland C++, etc., ad nauseam. In human language, however, ambiguity is not neutral. It is actually a positive. Literature and poetry revel in the ambiguity of language, in puns and rhymes and all those stupid idiosyncrasies. John Wilkins would probably have made one heck of a programmer. Arika Okrent, the author of In the Land of Invented Languages, points out that Wilkins' language was a great linguistic study and completely unusable as a spoken tongue. She is right. A language that is unfit for human speech is not necessarily worthless. As evidence, look at the myriad of computer languages available. These are all useful (well, almost all), but you would never catch me speaking to a person in C#, Java, PHP, Lisp, or what have you. The philosophical language is the kind of thing that computers love. Lacking in ambiguity, with new concepts as simple as placing a stub in a massive dictionary. The understanding comes almost for free. A great deal of effort has gone into trying to get machines to understand human language. At the current stage of development this is a lost cause. Hopefully it will not always be, but right now our combination of machine and algorithm cannot untangle the ambiguities of human speech. The example one of my computer science professors used was how a machine would figure out the meaning of the phrase "fruit flies like a banana". Is it that flies, fruit flies in particular, enjoy bananas? Or that fruits fly through the air as a banana would? The philosophical programming language might be the next step. True, it might be a little harder than picking up BASIC or PHP, but it would be a great deal more expressive. I know. This also sounds like it is approaching the heresy of building a DSL and expecting random business personnel to do their own programming. That's not really what I have in mind. The programming would still have to be done by programmers--but more of a dictation and less of a description. As I looked at the excerpts from Wilkins' tables, it reminded me strangely of Prolog predicate evaluation. It would be easy to represent his whole vocabulary as a sequence of facts in the opening of a Prolog program. With an unambiguous grammar, the whole thing could be parsed, understood, and executed. To the best of my knowledge, this has never been tried. I would love to see a first shot at it, wrinkles and all. Give me a shout if you know of or are working on something like this.
  • Adding Some Color

    This may sound crazy, but the thing I miss the most about Gentoo is its nice, pretty out of the box terminal. I do a lot of work from a good old shell, no matter what the OS. Even on Windows, I often whip out cmd to do basic file management. It is just much quicker and more efficient. As with any environment you spend a lot of time in, it is nice if it is easy on the eyes. Today, I got utterly and truly sick of Ubuntu's no-color prompt so I busted out Daniel Robbins' article "Prompt Magic" and built the following prompt:
    PS1="${debian_chroot:+($debian_chroot)}\e[34;1m\u\e[0;1m@\e[32;1m\h:\e[0;1m\w\e[32;1m \$ \[\e[0m\]"
    export PS1
The Debian chroot business is lifted from the default /etc/profile. The result is imperfect, but much better.
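One wrinkle worth flagging as an aside: bash only computes prompt line lengths correctly when the non-printing escape sequences are wrapped in \[ and \]. The prompt above only wraps the final reset, which can cause odd line-wrapping on long commands. A version with every escape wrapped would look something like this (same colors, only lightly tested):
    PS1="${debian_chroot:+($debian_chroot)}\[\e[34;1m\]\u\[\e[0;1m\]@\[\e[32;1m\]\h:\[\e[0;1m\]\w\[\e[32;1m\] \$ \[\e[0m\]"
    export PS1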
  • We'll be back, after this brief commercial message...

    I think I have mentioned work on some websites, as opposed to super ultra mega cool compsci stuff. Well, one of them was relaunching http://www.saltmagazine.com/. I probably did the previous version six or seven years ago. While I was in programming at the time, I had done no web programming--the site was poorly designed visually (now, that was not entirely my fault) and was all static HTML. Not even using Apache SSI, so the navigation had been copied and pasted to every page. That website had been one of my great shames. For this go-around, we used a free template modified to show a SALT banner and ran it on top of the Drupal CMS with Ubercart and Authorize.net (die PayPal!). The off the shelf template isn't as cool as I might like, but it doesn't look cartoonish like the old one did. At any rate, it is nice to see an amateurish first attempt gone and replaced with a more professional approach. Anyway, I am writing this post for a slightly different reason. My sister's first novel (The Last Heir) is available for sale on the rechristened site along with three preview chapters. Read it, buy it. And now, back to Mad Computer Scientist...
  • A Century of Mad Computer Scientist

    A century of posts that is. Not counting this little blogging equivalent of a "FIRST!!!" post, WordPress says I have 107 posts up. Considering the often overcrowded life and lack of time, this isn't bad. I look forward to making another hundred. So, ONE-HUNDRED-AND-EIGHT!!!!!
  • Notes on Building Bespin

    Bespin is yet another cool project from Mozilla Labs. Ironically, Mozilla Labs seems to be geek through and through: they create stuff and it pretty much disappears. What software has come out of Mozilla Labs that really moved into mainstream use? Firefox does not count. It predates "the lab". The idea of Bespin is that you do your code editing online. Cool. Especially for web programmers (which is how I spend the majority of my development time, of late), as it means that you can work on the fly, anywhere that you have an internet connection. That caveat is becoming a smaller and smaller one as of late. With the dawn of netbooks and easy to find WiFi, the idea of coding without an internet connection is becoming the harder sell. You can already start using Bespin without the hassle of setting it up. Mozilla allows you to create an account and use their software as a service at http://bespin.mozilla.com/. This tends to make me uneasy and has the disadvantage of not allowing me access to the compilers and interpreters I love to play with. So, here is how you go about taking the latest source and running it yourself:
    1. Get the latest revision via Mercurial. Normally, I would suggest using a stable release, but in a package this young, no release is stable and the difference between release and head is much larger than usual. Use this command:
       hg clone http://hg.mozilla.org/labs/bespin/
       If you do not have, do not want, or cannot get Mercurial, navigate your browser (you do have that, don't you?) to the URL above. Mercurial's web interface will come up and you can simply download an archive (zip, gz, bzip2) of the latest revision.
    2. Bootstrap the setup. Bespin's backend is written in Python and uses a fair number of Python libraries. The easiest way to get all the dependencies at their correct versions is to install the libraries into a virtual environment--which is basically all the bootstrapper is for. To do this, enter the directory you grabbed above and run:
       python bootstrap.py --no-site-packages
    3. Install the right versions of two libraries: Path and Paste. Bespin uses some very specific versions--development revisions, as of this writing. To install the correct version of Path, run:
       bin/pip install http://pypi.python.org/packages/source/p/path.py/path-2.2.zip
       Next, to install Paste, run:
       bin/pip install ext/Paste-1.7.3dev-r7791.tar.gz
       The wiki is out of date here, as it gives the default location as lib instead of ext.
    4. To build a package that you can run on a server, run:
       bin/paver dist
       This will spit out build/BespinServer.tar.gz; copy this file (if necessary) to your web server and unpack.
    5. Configure the web server. At least, I assume we are using Apache; I am, but I would assume you could also run it under IIS, as there is an ISAPI filter for it, though I do not know how good it is.
Beyond this, I never got anything that worked. I installed the WSGI application, but got continuous complaints from it about missing files. I ran the development server out of the bespin source and it kind of worked, but I reached a couple of conclusions: Bespin is too finicky (dare I say buggy?) for anyone outside of its creators to really deploy, and I didn't really want to deploy it anyway. Ultimately, it felt like jEdit on the web. Now, jEdit is a fine editor, but I am too wired into the vi mindset to swap for a jEdit wannabe. At any rate, I can see some real potential here and I hope they do well. It's just not for me.
Finally, I am posting these notes to help anyone else who may want to give the whole thing a whirl.
Sources: https://wiki.mozilla.org/Labs/Bespin/ProductionDeployment
  • Some thoughts on Linux Gaming

    As my previous post may indicate, I've been interested in setting up a few games on my Linux box. As I have been reading articles and browsing around, it seems that many of the highest quality games are not in official repositories and require adding new repositories (like UFO:AI) or compiling from source (like FreeOrion). There aren't that many high quality games available for Linux to begin with, and the official repositories carry even fewer of them by default. When gaming is frequently cited as a reason not to leave Windows, this seems crazy. Heck, I've been using Linux on the Desktop (usually in a dual boot environment) for a few years now and I didn't know about many of these projects until fairly recently. What would the more newbie-ish users see? Well, they would fire up Synaptic or KPackageManager or graphical YaST, or something like that and see no decent games. A few card games, maybe. The kind of thing that ships with Windows by default. They would google games and see only stuff that runs under Windows. Distros that want to capture the desktop audience (I'm looking at you, Ubuntu and Fedora) need to get on the ball.
  • An Awesome Tool

    I was having some issues with the ordinarily awesome KCacheGrind (it kept crashing, so I couldn't do anything with it), so I decided to look for another profiling tool. I finally found and installed Webgrind (http://code.google.com/p/webgrind/) and I must say that it is excellent. It appears that, after installing, you must ensure that Xdebug spits out its profiling data with a certain naming convention, or Webgrind will fail to pick it up. This is actually kind of odd, since there is a configuration option in Webgrind which is supposed to tell it what the naming convention is going to be. I could really see Apache/MySQL/PHP/Bespin/Webgrind as being a really cool environment to set up for distributed coding. But, that is a discussion for another day.
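For reference, the Xdebug side of that naming convention lives in php.ini. These are the settings I believe are relevant for Xdebug 2 (a sketch; names and defaults can differ between versions, so check the Xdebug docs), with the file name pattern set to the cachegrind.out.* form that Webgrind looks for out of the box:
    xdebug.profiler_enable = 1
    xdebug.profiler_output_dir = /tmp
    xdebug.profiler_output_name = cachegrind.out.%t.%p
After restarting Apache and hitting the page you want to profile, the resulting cachegrind.out.* files in /tmp should show up in Webgrind's file list.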
  • Drupals, CMSes, and Thoughts

    I have been setting up a couple of Drupal sites of late. Side work. The kind of thing that doesn't write papers or change the world, but puts a little money in my wallet. For one of these, I have been working on a fairly deep customization of one aspect of Drupal (I haven't decided whether I will write it up or not) and, in the process, I came across this blog entry: CMS battle: Drupal vs Joomla vs Custom Programming, on a blog named Paranoid Engineering (I sympathize with the title, if nothing else). The article itself was a fairly simple comparison of Drupal and Joomla which are, almost certainly, the biggest open source content management systems. The table of features head to head is not terribly interesting. For the most part, the two have feature parity and, where they don't, you can extend them. The first thing I found interesting were the author's general opinions of the two: "After test-driving them both I've came to these conclusions: Joomla is bloated, Drupal is minimal. Drupal is easy to use and intuitive, Joomla is confusing. That was more than enough for a minimalist like me." Oddly enough, those were my thoughts exactly when I used them. I got a chance on a contracting gig a ways back to use Joomla and I found its hierarchy of concepts to be confusing. When confronted with something confusing, there are two possibilities: (1) it is something new, beyond my current knowledge, and learning it will be an eye-opening experience that will teach me something, even if I don't use it or don't like it--at least, my mind will have been stretched; or (2) it is garbage. Unfortunately, I felt that Joomla, by and large, fell under #2. I did not feel that I learned some grand cosmic ideas once I got past that initial confusion. I believed that someone had erected a large number of unhelpful, artificial barriers. Drupal (which I learned later) divides content into blocks, which you can assemble on pages. That is it, in a nutshell. There was a lot more classification in Joomla. Moreover, I definitely fall in the minimalist camp. I would rather take something small and build it up into what I need than take something large and try to strip it down to what it should be. I used to be a Gentooer (and, if I could take a few weeks to set up my machine, I might very well be again; at this season of life I simply do not have the time), but I found the initial time compiling inconvenient. So, I tried Sabayon. Well, Sabayon had the desktop and essential apps compiled, which meant that I could at least work while upgrading everything. The problem was that Sabayon had a lot more on it than I wanted, so I tried to take the fluff off--and I broke the machine. Badly. I was running Gentoo again soon after. The moral of the story being that it is easier to add on than take away. So again, I would tend to agree with the author. One commentator also remarked that they did not like Drupal because: "What keeps surprising me about Drupal, is that they still stick to an outdated procedural programming style. PHP is moving more and more to OOP. Trying to force a programming style on a platform that's moving in a completely different direction is a weird choice to make." Personally, I think this is a little harsh on Drupal. According to Wikipedia, the Drupal project began in 2001. Also, by Wikipedia, PHP 4 was released in May 2000.
However, most servers are not running the latest release of a language (don't shoot the messenger, it's true), so writing in PHP 3-isms in late 2000 and early 2001 seems quite reasonable and, while PHP 4 had some OOP in it, I can tell you that it is second rate and actually pretty lousy. Frankly, if I had to write PHP 4, I wouldn't use its OOP. I know this because I have deployed code to servers running PHP 4 and had to backport the PHP 5 style classes. PHP 4 is just not a good environment for OOP, PHP 5 is okay and we'll see how PHP 6 does. Once your code is heavily procedural, fully OOPifying it would be a rather non-trivial amount of work, if not a rewrite. And, of course, Joel on Software tells us not to rewrite. The comments were probably more educational for me than the post. The commentators mentioned a few alternatives to the usual Joomla/Drupal dichotomy. The three I noticed there were Typo3, SilverStripe, and Modx. Of the three, the only one I had heard of was Typo3; SilverStripe and Modx were new ones. Typo3 aims to be, in their own words, an enterprise level content management system. Modx, on the other hand, really caught my attention. It is not so much a CMS (although it has one) as it is a framework to build CMSes. The approach is interesting because with Drupal, Joomla, or Typo3 you take the base CMS and add a lot to it. You customize it. The idea of simply expediting a custom rolled solution certainly has some appeal over stacking and restacking the blocks/modules/plug-ins/add-ons (or whatever the heck else they're called). I didn't spend much time looking at SilverStripe, but it does appear to be a very well polished CMS. If all this sounds like a ramble, well, it is. It is a rambling exploration of a series of things I stumbled across. If anyone has any thoughts regarding the CMSes above or one that I have omitted, I'd love to hear about it/them.
  • QuickTip: Changing Linux Timezones

    Run:
    ln -sf /usr/share/zoneinfo/US/Mountain /etc/localtime
Assuming, of course, that US Mountain time is what you want. Substitute the right country and zone. I read a couple of people saying that you have to log out to see it. You may have to--I didn't.
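One hedged footnote for Debian-derived distros: the zone name is also recorded in /etc/timezone, so it may be worth keeping that file in sync or letting the packaged tool handle both pieces. Treat the exact file and tool names as things to verify on your own distro; the symlink above is the part I actually did.
    echo "US/Mountain" | sudo tee /etc/timezone
    sudo dpkg-reconfigure tzdata   # interactive; rewrites the timezone configuration
    date                           # quick sanity check that the new zone took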
  • Running Smokin' Guns on Ubuntu 9.04

    I first read about Smokin' Guns (http://smokin-guns.net) on one of several reviews of the state of Linux gaming. First, download the binary zip. I found it easiest to go to the quake3.fr mirror (http://www.quake3.fr/index.php?f_id_contenu=1150&f_id_type=) for the download as it is one of the few mirrors that does not require registration (die, FilePlanet!). Once you unpack the download, you will find that the Windows and Linux binaries are packaged together. You will need to run:
    chmod +x smokinguns.x86
to make it executable. Once I did this, I got errors about OpenGL and OpenAL not being found, despite both being installed. The bottom post of the page linked in the sources gave the first part of the solution. You have to symlink the libraries to the ones that Smokin' Guns expects. The commands from that post are:
    sudo apt-get install libopenal1
    sudo ln -s /usr/lib/libGL.so.1 /usr/lib/libGL.so
    sudo ln -s /usr/lib/libopenal.so.1 /usr/lib/libopenal.so.0
Once I did all this, the game would start, but no sound would come. Here were the relevant errors:
    ------ Initializing Sound ------
    ALSA lib pulse.c:272:(pulse_connect) PulseAudio: Unable to connect: Connection refused
    ALSA lib pulse.c:272:(pulse_connect) PulseAudio: Unable to connect: Connection refused
    AL lib: alsa.c:344: Could not open playback device 'default': Connection refused
    Failed to open OpenAL device.
    Could not mmap dma buffer PROT_WRITE|PROT_READ
    trying mmap PROT_WRITE (with associated better compatibility / less performance code)
    /dev/dsp: Input/output error
    Could not mmap /dev/dsp
    Sound initialization failed.
A little bit of trial and error showed that the pulseaudio server needed to be installed. Ubuntu 9.04 ships with the client, but not the server (why, given that a number of apps apparently need it?). So, running
    sudo apt-get install pulseaudio
solved the problem. Interestingly, this incidentally fixed another problem I was having. Playing Flash video (or games) in Firefox would lock up sound system wide until Firefox was killed. Once the pulseaudio server was installed, the problem went away. I am going to give the game a shot (no pun intended) and see what I think of it. I just noticed that the game wasn't quite click 'n go for Linux users and wanted to make a note of my experiences.
Sources: http://ubuntuforums.org/showthread.php?t=1047232
  • I think I'm Developing a case of NIH

    Oh, I'm catching Not-Invented-Here Syndrome and catching it badly. I've been using OpenGoo and thinking about how easy it would be to write a better version from scratch. I was considering moving my feed reading to a web based application (Gregarius), and began thinking about how easy it would be to pick up some RSS tools and build my own. I've played with Bespin and been thinking that I could do better. Yes, I know. It isn't that easy. I also know that I haven't got the time to maintain my own web office software, web development environment, and web based feed reader. It's how Sourceforge earned its nickname Sourceforget. But, you see, if I just busted out my tools it wouldn't take that long...
  • Further Addendums

    Yet more on "Life is Good..": This same hardware can be gotten up and running on Ubuntu 9.04 by manually downloading the debs for the 2.6.30 kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.30/ . Things got a little hairy when I hit the nvidia drivers. The theory runs that DKMS should handle all those details automatically. For me, DKMS choked on the nvidia module. One reader on Launchpad suggested a set of updated drivers and a particular install order. That didn't quite work as described when I tried it. I had to install the new kernel (probably for the source and headers), then run the updated drivers that the reader referenced, then reinstall the kernel (so that DKMS would pick up the new module and use it). I'm sure there's more rhyme and reason to it all than that, but this is what I had to do to get it running.
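For anyone wanting the blow-by-blow, the manual kernel install boils down to grabbing the image and headers debs from the URL above and feeding them to dpkg. A minimal sketch follows; the file names are placeholders (the real ones are version- and architecture-specific, so copy them from the directory listing):
    cd /tmp
    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.30/linux-headers-2.6.30-xxxxxx_all.deb
    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.30/linux-headers-2.6.30-xxxxxx-generic_i386.deb
    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.30/linux-image-2.6.30-xxxxxx-generic_i386.deb
    sudo dpkg -i linux-headers-2.6.30-xxxxxx*.deb linux-image-2.6.30-xxxxxx*.deb
Reboot into the new kernel before fighting with the nvidia module.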
  • The Noisy Desktop

    Desktops as of late have been getting noisier and noisier. From cartoonish callouts (Windows and Gnome) to sliding dialogs (KDE), the Desktop seems bound and determined to tell us things. I really wish it would just shut up. We have Windows telling us (and, especially in Vista, selling us) everything under the sun. KDE is telling me things about its Phonon backends. Who cares? Unlike the typical user (who is supposed to be the audience in usability), I actually know what Phonon is and what its backends are--and I STILL DON'T CARE! How much less would someone who doesn't know? The Desktop needs to be a little less interesting. Appealing and functional, yes, but it should not be trying to draw attention to itself. It should fade softly into the background unless I ask it for something. It should always be supporting, but never running the show. The Desktop should be seen and never heard.
  • Regarding "Dreaming in Code"

    I just finished reading "Dreaming in Code" by Scott Rosenberg and I must say that it was an interesting read. Rosenberg writes as one who is clearly technically savvy, in the sense that he can get around his computer proficiently, but is a clear outsider to the field of software engineering. This externalness is actually refreshing, giving an interesting perspective as an outsider looking in. This is interesting precisely because it is rare: most of those outside of programming would rather look away than in. I get the feeling most would rather swim in raw sewage than in code. The book follows the journey of Mitch Kapor and his team in the early days of the Open Source Applications Foundation (OSAF) and the creation of its first application, Chandler. Along the way, Rosenberg makes user-friendly detours into programming history and concepts. The amateurism showed through in a few parts. For example, there was Rosenberg's description of object oriented programming. The description sounded like the author had a Java programmer whispering in one ear and a Smalltalk-school professor whispering in the other. The result is some text that isn't entirely wrong, but doesn't describe object oriented programming, either as it is in theory (Smalltalk and academia) or as it is in practice (Java). Ironically, Rosenberg should have picked up on this as he later quotes Alan Kay, the father of object oriented programming, as saying "Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind." That should have given him the clue that he needed to probe a little deeper. Of particular interest were the chronic problems that Chandler had while getting up and running. The usual problems that software encounters were discussed. Things like an uncertain design, personnel changes, technology changes, the uncertainty of estimates, and the fact that software is not infinitely malleable were all addressed as general problems for software slippage and even for some of Chandler's problems. Interestingly, I thought that the most important problem with Chandler's development (as portrayed in the book, which is my only real source on the matter) was the dream man himself: Mitch Kapor. It seems like an odd charge to bring, given some of Kapor's other successes, from the design of Lotus 1-2-3 to the chairing of the Mozilla board. Kapor seems like the kind of man who would know his way about software development. The problem seemed to lie in his style of leadership or, sometimes, his lack thereof. For example, there are several large sections showing technical debates in which Kapor himself took a side role and basically waited to see what consensus would arise. Indeed, Kapor himself lauds the project's democratic style. Somehow, we programmers have an almost endless capacity for debate, even pointless debate. Most leaders or managers are too heavy handed, not letting good people get work done, but bogging them down in red tape and superfluous micromanagement. Kapor was too light handed. He did not step in to push the project along, but let it languish. Another problem you see (which may have even been acknowledged by one of the departed developers) was that during the timeframe given, Chandler refused to be either a Cathedral or a Bazaar. The book discusses Eric Raymond's famous essay and Kapor's interest in it. However, when OSAF began work on Chandler, the code was released, but the real work, especially design work, was taking place in a meeting room in California.
Even after a wiki was added, the book mentions one of the team members entering notes from the meeting into the wiki. The problem with doing this is that it takes on the weaknesses of both approaches: you neither have the protective wall of the Cathedral nor the open marketplace of the Bazaar. Ideally, Kapor would not have hired anyone up front, but would have sat down with Python and wxWidgets (the selected technologies) and banged out a simple version himself. Then, released it on the web in true open source mode. Then, if he felt it necessary to move the vision along, hired developers. The ironic thing is that, at the beginning, the project received a lot of Slashdot attention. It is entirely possible that, had Kapor taken this route, he could have had his dream, perhaps without hiring anyone or leasing a bunch of office space. Overall, it was an interesting read. Reading the history of a software project is kind of like reading folklore from the land of Geekdom. It was also of interest to see how the whole thing looked when watched and researched by an outsider to the field.
  • Using CLOS...

    Using CLOS to do Java-style OOP is like using a 10,000 watt flamethrower to take out a frog.
  • Grokking IIF

    Sometimes, the best way to integrate two applications is to give up on automatic interfacing and just dump out some common data that can be imported/exported as needed. Recently, an application I was working on reached this point with the QuickBooks API. So, I implemented the relevant exports from the primary application to IIF (Intuit Import Format) and I thought I would go ahead and post some notes on using it. IIF is a tab (hard tabs, not spaces) delimited format that is, in its evil heart, a hybrid approach between EDI and CSV. An IIF file basically comes down to two components, repeated endlessly (well, almost; the QuickBooks KB has some notes on the process and gives the transaction limit as being at 10,000): a specification section (which, as its ad-hoc name implies, specifies the format to be used) and a data section. In the specification section, we basically tell the importer what format is going to be used for the transaction. It is really just a list of fields and in which order they will occur. The data section, on the other hand, follows the template specified by the specification section, providing data in the order in which it was specified. So, to take apart one of the examples:
    !TRNS    TRNSID    TRNSTYPE    DATE    ACCNT    NAME    CLASS    AMOUNT    DOCNUM    MEMO    CLEAR    TOPRINT    ADDR5    DUEDATE    TERMS
    !SPL    SPLID    TRNSTYPE    DATE    ACCNT    NAME    CLASS    AMOUNT    DOCNUM    MEMO    CLEAR    QNTY    REIMBEXP    SERVICEDATE    OTHER2
    !ENDTRNS
    TRNS        BILL    7/16/98    Accounts Payable    Bayshore Water        -59.25            N    N        8/15/98    Net 30
    SPL        BILL    7/16/98    Utilities:Water            59.25            N        NOTHING    0/0/0
    ENDTRNS
For emphasis' sake: this is a hard tab delimited format, so the spacing shown by this page is a little deceptive. Using C-style escapes (\t for tab, in case you were wondering), the file looks like:
    !TRNS\tTRNSID\tTRNSTYPE\tDATE\tACCNT\tNAME\tCLASS\tAMOUNT\tDOCNUM\tMEMO\tCLEAR\tTOPRINT\tADDR5\tDUEDATE\tTERMS
    !SPL\tSPLID\tTRNSTYPE\tDATE\tACCNT\tNAME\tCLASS\tAMOUNT\tDOCNUM\tMEMO\tCLEAR\tQNTY\tREIMBEXP\tSERVICEDATE\tOTHER2
    !ENDTRNS\t\t\t\t\t\t\t\t\t\t\t\t\t\t
    TRNS\t\tBILL\t7/16/98\tAccounts Payable\tBayshore Water\t\t-59.25\t\t\tN\tN\t\t8/15/98\tNet 30
    SPL\t\tBILL\t7/16/98\tUtilities:Water\t\t\t59.25\t\t\tN\t\tNOTHING\t0/0/0\t
    ENDTRNS\t\t\t\t\t\t\t\t\t\t\t\t\t\t
Which is a lot denser, but also a lot more precise. Any line beginning with an exclamation point (!) is one of the specification lines.
    !TRNS    TRNSID    TRNSTYPE    DATE    ACCNT    NAME    CLASS    AMOUNT    DOCNUM    MEMO    CLEAR    TOPRINT    ADDR5    DUEDATE    TERMS
So, this line is specifying that the data to be imported is a transaction (which includes bills, invoices, and several other items), which has the following fields in order: transaction ID (I've never used this in my imports; your mileage may vary, though), transaction type (BILL in the case above), date (which will be the bill date here; in other places it has slightly different meanings; check the docs), account (AP), name (name of the entity sending the bill; more on this in a moment), class, the total amount of the bill, document number (reference number, for a bill), memo, whether or not it has cleared, whether or not this bill needs to be printed, an address, due date and terms. A note on names: these names must match exactly what QuickBooks has on file. If they do not, the IIF importer will create the value automatically.
So, if you want to import a bill from "Somecorp", but type it in as "Somecorp, Ltd.", a new vendor "Somecorp, Ltd." will be created with the bill. This applies to all name-based items in the file, making the IIF import a little tricky and fairly dangerous. Many entities in QuickBooks are hierarchical, so if you want, for example, a class of "bar" which is a subclass of "foo", you would specify it as "foo:bar". Excluding quotation marks, with the colon, and no spaces between the colon and either "foo" or "bar". The source listed below links a zip file with information on the IIF format. It is sparse, but enough to get going. It has some example IIF files (including the one dissected above) and some HTML files specifying which fields are available and/or required for each type of data to import. It is also important to realize that IIF imports are officially deprecated, so be aware of this when writing your own importer/exporter.
Sources: http://support.quickbooks.intuit.com/support/pages/knowledgebasearticle/1003348
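As a final sketch of what an exporter actually has to emit (a made-up bill, not lifted from the export I wrote, and trimmed to a handful of fields--check the field docs in the zip above before trusting it), the whole trick is just writing literal hard tabs between columns:
    # emit a minimal IIF bill; printf '\t' produces real hard tabs
    {
      printf '!TRNS\tTRNSTYPE\tDATE\tACCNT\tNAME\tAMOUNT\tDOCNUM\n'
      printf '!SPL\tTRNSTYPE\tDATE\tACCNT\tNAME\tAMOUNT\n'
      printf '!ENDTRNS\n'
      printf 'TRNS\tBILL\t7/16/98\tAccounts Payable\tBayshore Water\t-59.25\t1234\n'
      printf 'SPL\tBILL\t7/16/98\tUtilities:Water\t\t59.25\n'
      printf 'ENDTRNS\n'
    } > bill.iif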
  • Life is Good!

    I have full suspend/resume working, with binary nvidia drivers, and the ath5k driver for my wireless! Running on Fedora 11 Rawhide (11.90, according to my new boot screens). My /etc/pm/config.d/suspend_modules includes a line unloading ath5k. Other than that, it all works. So, to summarize: I have an HP/Compaq Presario F756NR laptop, with an Atheros wireless card, an nvidia GeForce 7-series card (I forget the details), an AMD Turion 64 X2 processor, and 2GB RAM. (Why do I have the feeling that, someday, I will read this article and feel like a dinosaur?) To get it all running beautifully, I:
    * Installed Fedora 11.
    * Added hpet=disable and pci=nomsi to the kernel options.
    * Enabled the Rawhide repo.
    * Updated everything.
    * Added ath5k to the list of modules to be unloaded on suspend and reloaded on resume.
And it works. Wireless. Accelerated graphics. Power management. WHOOHOOOO!!! ADDENDUM: Once I updated the kernel to 2.6.30, the pci=nomsi parameter had to be removed.
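For the ath5k piece, the pm-utils convention (as I understand it--double check against your distro's pm-utils documentation) is a small shell-style config file that sets SUSPEND_MODULES, something like:
    # /etc/pm/config.d/suspend_modules
    SUSPEND_MODULES="ath5k"
pm-utils then unloads the listed modules before suspending and reloads them on resume.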
  • Gentoo is calling...

    I bit again. Despite warning myself otherwise, I did it again. I installed Fedora 11. Hibernation works most of the time; suspend itself works, but the computer fails to wake up afterward. This happens with either the lousy nouveau driver (hey, how about using the manufacturer's drivers by default? or at least, making them easy to set up ala Ubuntu?) or the binary nvidia one. I've trolled the forums. Plenty of people with HP/Compaq machines report the same issue and many recommend cures--none of which has had the slightest effect for me. I could downgrade to Ubuntu 8.04--again, but I kind of like having slightly newer userland apps. But Gentoo sounds alluring. No, I don't really have the time to compile my system from scratch, but the nice thing about Gentoo was that I always figured stuff out. I didn't have this junk to put up with. If something broke, I got my hands dirty and fixed it. With Fedora and Ubuntu, that is a great deal harder. The tools want to make it easy, not transparent. I probably shouldn't, but it sounds good anyway. I am once again thinking that I might like a Mac after all...
  • Am I the only one who thinks...

    ...that having Google Desktop integrate into QuickBooks is a recipe for disaster?
  • Building Chrome on Linux

    After blogging about Chrome yesterday, I just read that Chrome will arrive for Linux soon. Google has posted a development version of Chrome for both Linux and Mac (http://news.cnet.com/8301-17939_109-10257538-2.html). Like any good hacker, after reading this piece of news, I moseyed on down to the dev site and pulled the latest and greatest code. It is a little odd (but not really surprising) that Google has its own set of wrapper scripts (depot_tools) that you need to use in order to get the code. Yeah, they have other purposes (code review, they say; I haven't checked it out), but it still seems odd to instruct people to:
    $ cd $CHROMIUM_ROOT
    $ gclient config http://src.chromium.org/svn/trunk/src
    $ gclient sync
instead of
    svn co http://some-url.com/chromium/trunk chromium
The latter is familiar. The former is not. But I digress. It took a while to pull down the full code base, the tarball for which weighs in at an impressive 713MB, including platform specific code for all three platforms, sounds, textures, third party code, and tests. I built under Fedora 11 and had little difficulty. Here was the process I used (streamlined from the bumbling experimental style in which it was worked out):
    1. Installed prerequisites as per the wiki article. The only way in which I differed from their instructions was that I removed all of the architecture extensions from the arguments. So, glib2-devel.i386 became glib2-devel. This was because my computer is running an i586 architecture with the repos set up to match it. Dropping the extensions installed the proper packages.
    2. Installed depot_tools. I just downloaded it to a location in my home directory, unpacked, and added it to the path.
    3. Installed gyp. Google is in the process of migrating the build over to gyp.
    4. Downloaded and unpacked the source tarball.
    5. Navigated to the chromium/src/build directory.
    6. Ran gyp All.gyp
    7. Ran hammer
This built a binary under chromium/src/sconsbuild/Debug. First impressions: In all fairness, this is an incomplete developmental release. So Google protests about a thousand times before you get it up and running. Flash doesn't work. Don't know if Java works. Overall, things seem to work fairly well though. The most obvious, annoying thing is having the whole window flash green when opening up a new tab. It is nice to have it to play with, though. Watching pages and pages of compiler messages flow through my terminal made me vaguely nostalgic for Gentoo, where every single application had to be installed in like manner. Fun project.
  • Some thoughts on Google Chrome

    I have been, once again, doing some more work in Windows Vista than usual as of late and so I took the opportunity to set up and use Google Chrome for most of my online activities. First, I would like to list my reservations, so that this does not turn into too much of a schmooze fest. What I do not like about Chrome:
    * It is from Google. I love Google's products, which almost irks me at times, because I do not like the idea of a Google monopoly or lock-in any better than I do a Microsoft one. Microsoft was not always the evil empire that it is now and we may assume that, given enough time, Google will become truly evil.
    * Privacy concerns. To some extent, this is related to the above, but there was actually an interesting brouhaha about this browser in particular. Please see the Ars Technica article regarding the Chrome EULA controversy. Interestingly, a friend and I noticed this and held off using the browser earlier. This is all on top of the history and keystroke tie-ins that Google has.
    * It is not cross platform. One thing you have got to love about Firefox is that it is more or less the same on any operating system. You can count on it being present and on the vast majority of the add-ons working, as well. Chrome, however, is Windows only for the time being. No, Wine does not count.
Now, for what I like. The good in Chrome comes in the UI, which is not surprising given that they used an existing rendering engine (I would have done the same; a new rendering kit is a great way of spending a lot of time and effort on creating a toolkit that, if it is of any use, is also incompatible with the rest of the world).
    * All of Chrome has a smooth, easy appearance which is very pleasant on the eyes.
    * Chrome feels fast. How much of this is the actual engine, I do not know, but it is a pleasant experience nonetheless.
    * The search dialog is smaller and less intrusive than Firefox's or IE's dreadful popup box. Because of the way it tucks away into the upper corner, it stays out of the way better on average.
    * The ability to use the address bar as either an address bar or a search engine is very, very nice.
    * I like what they have done with Opera's speed dial page. The addition of a search engine and history makes it genuinely useful.
    * I am glad to see them getting rid of the menubar. Firefox really ought to do this as well.
In conclusion, Google's browser offers some nice UI improvements to the browser. It will be interesting to see how Firefox and Opera react. Safari is unlikely to take much, for good or ill. Apple marches to the beat of a different drum.
  • Playing Devil's Advocate

    In all the cool geek sites, you hear PHP put down as a matter of course. Its inferiority goes assumed by most of the audience. Every once in a while, you see someone decide to defend it, which leads to a nice old flame war, but that's about it. The irony I find is that few of PHP's defenders really do a good job defending the language. The extreme is this one guy, whose defense of PHP runs: "I like it. It does everything I've ever wanted it to do." Not much of a defense. Leaving Turing Completeness aside, the defense says nothing. Anyone can say that about any number of languages and you could never prove them right or wrong. If PHP can do everything you ever wanted to do, then at least half a dozen other languages can (C#, Java, Perl, Python, Ruby, C++). Why PHP? What you usually find with this kind of defender is that it was a language that they picked up at some point, for some reason, and they do not know enough about other programming languages or even PHP itself to be able to give an adequate reason. The charges against PHP are that it is sloppy, inconsistent, encourages laziness (and not the Larry Wall variety), and lacks some of the cooler features of other languages. Guilty as charged. All of those things are true and have been rehashed many times. As a disclaimer, let me say that I agree with this point of view. For web programming, you are better off with Python than PHP. But, let me play devil's advocate for a moment. If I were going to argue for the use of PHP, what would I say? Well, I would begin by pointing out that PHP is fast. Maybe not "out of this world, I funrolled my own loops, and wrote my web framework in assembly" fast, but it is fast. PHP is also very easy to deploy. If you set up a Python WSGI application or a Rails application on Phusion Passenger, there is a fair amount to set up in the web server and get running. Not unbearably so, but PHP wins as, in most cases, you copy over the source and you are good to go. There is a ton of library code available and many fine, large applications (obligatory Drupal and WordPress references here) written in PHP. There is an abundance of cheap hosting that has PHP set up by default (Python is starting to gain traction as GoDaddy has started offering Python over CGI in their deluxe Linux packages), which, in many cases, is important. Sure, if you are personally developing a large Enterprise application, it seems reasonable to expect someone to be able to configure a server for it. On the other hand, if you are running a small business and just want a CMS to run your web site on, you do not need anything more than cheap, low-level hosting. In this case, the business is not the developer and does not hire the developer, but if you are writing the software and want this small business owner to use it, then there is a definite advantage here. WordPress would probably never have caught on if it were not written in PHP because it would be too much work for a lot of people to set up and too much work for most hosting services to support. There are some notable, often-cited reasons to use PHP left out. For example, many have argued in PHP's favor because there are many developers, which means that it should be easier to fill any new positions in the case of departures (an obvious concern for open source and closed source teams alike). The problem with this argument is that, while it is true that there are a very high number of PHP developers, the problem here is that a very high percentage of them simply are not any good.
They picked up a bit of PHP and HTML, but were too lazy to learn enough to be truly excellent developers. The end result is that this is more a problem than it is a solution, because there is a lot more chaff to sift through in the hiring process. It certainly is not obvious that you are better off this way than, say, Lisp. There are few Lisp coders on the market, but, as a rule, they are better coders. The very bottom of the barrel have not even heard of Lisp and, if you try to get them interested, you encounter the Blub phenomenon: they cannot comprehend what a more powerful language would even look like.
  • Memo 2 Me

    Whenever setting up a Linux distro, especially a Debian derived one (like, I don't know, Ubuntu), always, always, always make sure that Sun's JDK is the selected Java implementation and NOT GCJ. Here is the quick 'n easy command in Ubuntu:
    sudo update-alternatives --config java
Frankly, I think I feel a rant coming on. Maybe this has already been fixed in a later edition of Ubuntu (I wouldn't know, as I am still at 8.04 for power management; yes, this is both a hint and a gripe: I want PM to work at least as well on my machine in current releases as it does in the one I am running), but Sun's OpenJDK should be the default. It is almost 100% of the closed source JDK, with the last bit coming along soon, fully GPL'd (so no licensing complaints), and much better than GCJ for run of the mill Java applications.
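If you want to double check which implementation is wired up before or after switching, these are handy (the exact alternative names vary by release, so trust the output over anything I list here):
    java -version                      # shows whether Sun's JDK or GCJ is answering
    update-alternatives --display java # lists the registered alternatives and the current choice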
  • ASDF:*central-registry* Made Easy

    Lisp has an ecosystem all its own that is well outside of the box. As a result, few package managers have many packages for it (Debian has more than usual and is still missing a great many common packages). Of course, intrepid Lispers are not put off by this at all and create their own system definition systems. The most famous of these are, of course, asdf and mk-systems. Asdf is only half of what apt is for Debian. It defines systems and dependencies, but it has no way to automatically grab and install any of these systems. At some point, asdf-install was created. It was an attempt to supply the missing half of apt for the Lisp universe. The only problem is that it doesn't work all that well. (Note: Mudballs is an attempt to round out asdf into a full apt-like system with both system definition and administration; as a young project, it will be interesting to see what happens.) In the final analysis, what happens for most people (or me, at least) is that they find the libraries they need, download them to some specified destination, and add the resulting directory to asdf:*central-registry* in the startup Lisp code for their interpreter, so that a quick (asdf:operate 'asdf:load-op 'foo) loads the system "foo". After delving into a bit of Lisp code, I quickly compiled a huge initialization file with many lines pushing each specific path onto the registry. A couple of days ago, I threw it all out and replaced it with a handful of lines that adds all of the directories beneath some predetermined directory to the registry. Ladies and gentlemen, for your viewing pleasure, here is that code:

        (require 'asdf)
        (push #P"/home//lisp/cl-fad-0.6.2/" asdf:*central-registry*)
        (asdf:oos 'asdf:load-op 'cl-fad)
        (mapcar #'(lambda (path)
                    (if (cl-fad:directory-pathname-p path)
                        (push path asdf:*central-registry*)))
                (cl-fad:list-directory #P"/home//lisp/"))

    To sum up, you fill in your own username in the paths above (or replace the whole path). What I do is keep a directory called "lisp" (creatively enough) and download all of my development libraries into that directory. This code makes it so that that is all I have to do to add a library. Enjoy!
  • Playing with Prism

    This weekend, I decided to goof off with Mozilla Labs' Prism project. It is the Mozilla Foundation's implementation of the Site Specific Browser (SSB) concept. I set up or downloaded applications for Napster, Pandora and Gmail. Out of the box, it has novelty value, but little in the way of practical uniqueness. In its current form, you fire up Prism, provide a URL, and create one or more shortcuts. When the link is clicked, it will launch in its own private browser session, without toolbars, extensions, or navigation keys by default. This is nothing that Internet Explorer hasn't been doing for a long time and even Firefox can do most of it (with the unfortunate exception of not starting an independent browser session). You can also set a custom icon on it so that it "feels" a little more like a desktop application. Prism's wiki indicates that you can do more with it, like custom styling, but I haven't seen anyone actually use these capabilities. The bundles I tried (like Gmail) simply create the shortcut and use a custom icon. A great deal of the issue here is, no doubt, what Joel Spolsky calls the chicken-and-egg problem: to be interesting, Prism needs bundles, but no one is willing to create the bundles for Prism because Prism isn't interesting yet. If and when it catches on, the idea has potential. Gmail, for example, could run beautifully in the browser but have nice tie-ins on the desktop, like showing a message count from the system tray, starting a new composition when I click an e-mail link, and allowing drag and drop of files (particularly for attachments). So far, it has had one very pragmatic benefit. As many Linuxers are aware, Flash and sound are both known to be somewhat problematic. Having downgraded to Ubuntu 8.04, Flash locks up sound when run in Firefox, which means that, to run a desktop sound application (like, say, Amarok), I have to kill Firefox to release the lock. This works and, with Firefox's session management, is almost liveable, but is quite annoying. Using Prism for the Flash sites (which, for me, are few) allows me to kill the Prism "application" and release the lock, leaving my browsing and work untouched. This problem seems to have been sorted out in later versions of Ubuntu--to which I will not upgrade until suspend/resume and hibernate/resume work. However, this benefit really just makes up for two shortcomings elsewhere: the abominable state of Linux sound and the fact that Firefox does not run with separate processes for each tab (a la Chrome). Mozilla's is not the only project attempting to work off of the SSB concept. A quick googling turns up many options, though most are limited to either Windows or Mac OS X. The idea itself seems to be a bridge. From where and to where, it is difficult to say. We could be bridging from desktop applications to web applications. From web applications to RIA applications on the desktop. From the web back to the desktop. The motivation probably varies by implementor. For Adobe, it is most probably a bridge from web applications to RIA. Once you are started with AIR, it will probably be a lot simpler to get you to move to Flex than it would be to jump directly from web programming to the hybrid world of Flex. Mozilla would rather see you use Google everything than see you use Microsoft products, as this keeps the browser as the focus of your world. 
So, they want to create a bridge that eases desktop-oriented users (which is, for the most part, middle-aged and older users, as the younger ones are used to doing everything over the web) onto the web. So, to sum up, Prism has some potential, though right this minute it isn't the biggest whoop in the world. However, it can still be kind of nice for odds and ends.
  • Another MySQL Diversion...

    At the fine MySQL Performance Blog, in the post http://www.mysqlperformanceblog.com/2009/05/21/mass-killing-of-mysql-connections/, the author gives a sequence of MySQL queries that generates, and then runs, a second sequence of queries to kill database connections. Once again, I had made a comparable home-brew script and so, as another diversion, I will provide my version:

        #!/bin/bash
        for process in $(mysqladmin -uroot -ppassword processlist | awk "BEGIN { FS=\"|\" } \$4 ~ /$1/ { print \$2 }")
        do
            echo "Killing process $process"
            mysqladmin -uroot -ppassword kill $process
        done

    Where, naturally, one would substitute the correct login information. The hard-coded login information aside, I like this method a bit better as it is less interactive (I simply run the script with a given host as its argument and it kills all connections from that host) and it does not create any temporary files to be cleaned up later. The caveat, of course, is that it kills all connections for a host--it does not allow for cherry-picking.
  • syntax-rules in Common Lisp

    I was hacking some Scheme code (about which a post will be forthcoming) recently and got to thinking that syntax-rules was nice. Really, really, nice. For those unfamiliar with it, both Scheme and Common Lisp include support for macros, which are, basically, code transformations that happen at macro-expansion time. These transformations are not unlike those done with #define in C and C++, in that they happen before the code is ever run, but they are much more powerful. Common Lisp macros are functions that accept a form as input and produce another form as output to be evaluated. Scheme macros, on the other hand, are usually defined with syntax-rules, which allows the input to be transformed according to rules. In this sense, the Scheme macro system is similar to Awk or XSLT. From all I can tell, the systems are equal in terms of raw power. However, there are cases when the Scheme version is simpler or more concise. So, I consulted google. It turns out that Dorai Sitaram has implemented Scheme macros in Common Lisp. It is an interesting little library, to say the least. I thought one comment the author made was odd, though: "The ellipsis is represented by `...' in Scheme. This Common Lisp implementation uses `***' instead, since `...' is disallowed by Common Lisp's identifier conventions." This is almost true. If you use ... directly at the top level, you will indeed get an error. However, the spec does provide for using odd characters in symbols using the | syntax. So, this fails: (let ((... 1)) ...) but (let ((|...| 1)) |...|) works. "But wait a minute," you may say, "|...| isn't the same as ...". Well, yes, it is. In the eq, eql, and equal senses of the word, ... is |...|. Perhaps it would be more precise to say that ... is the print name for both ways of writing the symbol--which is what matters. The nice point for Scheme is that, of course, you can use either macro system with no extra dependencies (assuming, of course, R5RS-or-later compliance). However, I cannot say which system I like better. syntax-rules can be kind of nice, particularly for comparably simple macros that involve symbols as keywords (i.e. something like (do ... with foo)), but for anything more complex, the rules get so tangled that you are probably better off doing things the Common Lisp way (which is, I'm sure, why Scheme includes support for low-level macros as well).
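    To make the comparison a little more concrete, here is a small sketch (the macro and its name are my own illustration, not something from Sitaram's library): a two-place swap macro written the Common Lisp way, with the equivalent Scheme syntax-rules definition shown in comments for contrast. Common Lisp already has ROTATEF built in, so this is purely for demonstration.

        ;; Scheme, for comparison -- a rule-driven rewrite:
        ;; (define-syntax swap!
        ;;   (syntax-rules ()
        ;;     ((_ a b)
        ;;      (let ((tmp a))
        ;;        (set! a b)
        ;;        (set! b tmp)))))

        ;; Common Lisp -- the macro is an ordinary function from forms to forms,
        ;; built with backquote, guarding against variable capture by hand with GENSYM:
        (defmacro swap! (a b)
          (let ((tmp (gensym)))
            `(let ((,tmp ,a))
               (setf ,a ,b)
               (setf ,b ,tmp))))

        ;; (let ((x 1) (y 2)) (swap! x y) (list x y)) => (2 1)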
  • Excel Dates -> Epoch Dates

    There are a lot of posts about converting Unix epoch dates to Excel dates (an Excel date is the number of days since 1900-01-01, whereas an epoch date is the number of seconds since 1970-01-01) and a few on converting Excel dates to epoch dates. The quick formula for converting an Excel date to an epoch date is: (EXCELDATE - 25569) * 60 * 60 * 24, where 25569 is Excel's serial number for 1970-01-01. This is slightly modified from a comment on a post that I can't seem to find on the spur of the moment.
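    For anyone who wants it as code rather than a spreadsheet formula, here is a small sketch in Common Lisp (the function names are mine, purely for illustration):

        ;; In Excel's 1900 date system, serial 25569 is 1970-01-01, so the
        ;; conversion is just a shift and a scale. Fractional serials carry
        ;; the time of day.
        (defconstant +excel-unix-epoch+ 25569)
        (defconstant +seconds-per-day+ (* 60 60 24))

        (defun excel->epoch (excel-date)
          "Convert an Excel serial date to Unix epoch seconds."
          (round (* (- excel-date +excel-unix-epoch+) +seconds-per-day+)))

        (defun epoch->excel (epoch-seconds)
          "Convert Unix epoch seconds back to an Excel serial date."
          (+ (/ epoch-seconds +seconds-per-day+) +excel-unix-epoch+))

        ;; (excel->epoch 25569) => 0            ; 1970-01-01 00:00
        ;; (excel->epoch 39448) => 1199145600   ; 2008-01-01 00:00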
  • Old Friend...

    This week, I have been tasked with making some changes to code that was gnarly to write and has gone untouched for some time. I needed to backtrack and look at an older edition, so I fired up good old svn log. Snipped down, I saw these two lines:

        r154 | mmcdermott | 2009-05-18 16:04:22 -0500 (Mon, 18 May 2009)
        r117 | mmcdermott | 2008-05-21 15:42:22 -0500 (Wed, 21 May 2008)

    These were two consecutive checkins. That's right: this code had gone unaltered for almost exactly one year. Believe me, that is rare where I work. But what was nice about the whole thing was that I was able to look at that code and not call myself an idiot. Now that the dust has cleared, I want to rearrange a few pieces of code (and I'll probably take this chance to do it) to reduce the number of AJAX callbacks, but the code was readable, clean, and sane. As developers, we have all had those slap-my-forehead-what-was-I-thinking? moments when reviewing our old code. When I was in college, I spent one spring break (in addition to doing all of my semester projects) learning Scheme (this also being, I might add, my first encounter with any form of Lisp) and using it to write an application to normalize a table schema. I looked back at it later, after I had learned more about Scheme and found the SRFIs, and realized that I had basically reimplemented SRFI-1 and, since I was just learning the language, reimplemented it badly. So it is really nice to look at code that was written under the gun (as we will all have to write code, at some time or another) and say that it was a legitimately good job.
  • All I really want in a web framework...

    As of late, there has been a great deal of attention paid to web frameworks and the huge time savings that are supposed to follow from the framework selected. Ruby on Rails is, perhaps, the most famous and the one with the most devoted base. I played with Rails for a bit and the honest truth was that I liked the Ruby part a lot better than the Rails part. Rails tried to do a lot, to "help"--and I didn't like it. I have written up similar dislikes for CakePHP, which is, by its own admission, an attempt to bring Rails to PHP. Cake is not the only attempt to give PHP users the bliss of Rails. There is also PHP on Trax and Akelos and heaven knows how many others. It's not just the PHP boys hopping on the Rails hype. The BBC created a Perl on Rails. Heck, on cliki I even found a project named Lisp on Lines that endeavors to provide a "Rails-like" environment in Common Lisp. ASP.NET is a little better. It doesn't try to help with ORM or a lot else, but I can't help but feel that it has a tendency to be a little stiff at times. Though it is definitely one of the better frameworks out there (and, yes, it is only a framework; contrary to all-too-popular belief, you can write ASP.NET in any .NET language). What I am saying is that I, personally, want a minimalist framework. I want to be able to assemble a toolkit based on the project at hand, not on what is encouraged by the framework. I don't want to write a plugin to be able to use a bit of JavaScript effectively. Here is what a web framework should provide: Spackling over request details. I need the contents of the GET and the POST, but I sure as heck don't want to write a GET and POST parser for every application. Preferably, this would also spackle over the exact source of the request. When writing an application, I don't care whether the request came over CGI, FastCGI, an IIS filter, or mod_lang. I want the variables and that's it. A templating engine. A simple, minimalist templating engine. Nothing too fancy and, if I so desire, I should be able to ignore it with impunity or even substitute another. This should be one of the most loosely coupled pieces of the framework. As time goes on, I also expect a minimum of widgets and gadgets to be provided, as we get an increasingly large number of sites and apps whose UI is AJAX/Flash/insert-RIA-here driven. I really like Smarty for PHP work (which has a plugin API, but not controls in the ASP.NET sense). ASP.NET's templating engine is good, but its server-side controls are biased towards a method of development that is either outdated (controls rendered and handled server-side) or opaque (how does that callback work, again?). Pretty URL support. Yes, there is mod_rewrite and friends, but it makes things much cleaner to have only a couple of mod_rewrite rules and handle the rest application-side. Session management. HTTP is stateless, but web applications, as we all know, are not. Session management is not difficult so much as it is an annoyance to write with every app. Data access should not, as a rule, be involved. Particularly not data access in which it is difficult to perform raw queries against the database. Generic, all-encompassing data access layers are hard to use because they make a number of assumptions (in the name of simplicity) that simply do not hold true for everything. Worst of all are the schema assumptions that frameworks like Cake make. Django is the exception to the rule, though. Their DAL is excellent. You write an object model that inherits from some base classes. 
This is used as a very thin layer and you can still easily (yes, easily is the key word; I know you can write raw queries in Cake and Rails, but they try to hide it as much as possible) drop down into plain old SQL. This way, the developer is not boxed in by such things as the dictionary included in the framework. One time, I wrote an app in CakePHP and used a term for an entity (I cannot remember what) whose plural was not recognized by Cake; Cake requires that all tables have plural names, and the plural for this word was not recognized even though it was the correct English usage. This caused the DAL to go haywire and cost me an hour or two of debugging. The overarching philosophy should be to provide basic support for that which needs to be done in the act of serving up a web application and leave everything else to the user. The new wave of frameworks tries to do as much as possible for the user. In this sense, a framework is like any other tool or any other application: when you try to do the user's work for them and are overzealous about making things "easy", you get underfoot and make things difficult. RoR has made the "blog in 10 minutes" example almost mandatory for a new framework. Well, it's great that you can throw together a simple blog in 10 minutes, but who wants to? Therein lies yet another rub: RoR and its kin are tuned towards mass-producing almost-clones of certain types of applications. I, for one, do not like writing lots of almost-clones. I'd rather download Drupal or any one of a thousand other open source packages and configure that than write a custom almost-clone. What I want in a web framework is small. So, can I have that framework yet?
  • Debugging Ubuntu

    As always, it seems, the upgrade to the latest and greatest Ubuntu means fixing a couple of things. At one point, this meant wireless and power management, but times are improving! Now it's just power management. I have taken to using ndiswrapper for my wireless cards and find it a lot easier than trying to hack the open source drivers into kind of, maybe working. To me, that's a biggie because my main PC is a laptop. A laptop that I carry, open, use for a while, then close and put away. Rinse, rewash, etc. The point is that I often work on the go and expect to be able to flip open my laptop and pick up where I left off for a quick few minutes of work or play. I found two problems on my Compaq Presario F700 laptop on Ubuntu 9.04: suspend and a clock error. In the first, suspend worked beautifully--but only on AC. If I attempted to either suspend or resume while on battery power, the whole thing came crashing down. If, on the other hand, I suspended while on AC, unplugged the computer, ported it around, then plugged it back in and resumed, everything was fine. The second issue was similar: only on battery power, I would get sundry kernel messages relating to the clocksource (usually, "clocksource tsc is unstable"). The latter I fixed by adding the option hpet=disable to my kernel options in grub. The former I never did figure out, so I downgraded to Ubuntu 8.04. Suspend works there with the hints described here: https://wiki.ubuntu.com/NvidiaLaptopBinaryDriverSuspend, minus the Xorg configuration. Moments like this almost make me want a Mac...
  • Cygwin Gripe

    In my last post, I made a passing reference to an annoyance with Cygwin's setup.exe program. First of all, though, I want to say that, by and large, I love Cygwin. It makes life livable in Windows-land. The ability to run a full Linux-esque command line environment alongside the rest of Windows-land is absolutely wonderful. But I have a couple of gripes (the other one being that man pages load slowly; come on, it's just some possibly compressed text being filtered through some troff macros and piped to a pager; what's the deal?). The most important is the setup.exe utility. Cygwin uses one application as both its setup utility and its package manager. On the surface this has some nice symmetry and you only have to develop one app. Win, right? Well, not if you have to use it semi-regularly. The biggest thing is that you MUST go through the menus selecting your download path, your line endings, and your mirror, and redownload setup.bz2. This repetition is slow. Then you get to the package list and they have this tree structure that you have to muck around in, unless you go ahead and set the view to Full and check "Keep" (because, if I want one package, I don't want to upgrade every package in the system while I am at it). Then I have to scroll through a mile-long list (the problem that the tree structure exacerbates) and find the package. Why?! This little ritual is time consuming but, more to the point, it is outrageously annoying. I only actually switched line endings on a Cygwin install once (a few years ago)--and it was by accident and promptly jacked up X (it turns out, on Cygwin, that changing line endings throws X fonts completely out of whack; beautiful). Now, of course, I simply leave everything on Unix and use a translation utility when necessary, which isn't that often, as the applications I would run on Windows text files (most especially vim) handle the line-ending difference automatically. But I have to click through the option EVERY TIME! What would be really, really nice is having the setup.exe for those who want it, but providing an apt-get-like interface with all those options stashed, from install time, in a conf file. That way, I could type something like:

        $ cyg-get install openssh

    It turns out I'm not the only one to ever think this, either. Someone else wrote a cyg-apt in Python. Kudos to the author, but why does he have to do this anyway? And why isn't it an official part of the project?
  • ASDF-Install

    If you have seen any number of articles on this blog (which is highly doubtful), you realize that I often use this as my place to gripe about my favorite technologies (paradox intended). Today, I am here to gripe about asdf-install. For those not fortunate enough to know what I am talking about, asdf-install is a package manager running atop asdf (a system definition facility for Common Lisp; think of it as higher-order packages with lots of metadata and the ability to load files and dependencies semi-automatically). I have used it before in Linux environments, but today I tried it for the first time with clisp and Cygwin on (where else?) Windows. IT DOES NOT WORK!!!!!!!!!!!!!!!!! Okay, now that I have screamed in agony, I shall explain. First, it complained about gpg, which was easily fixed with a trip to Cygwin's setup program (which is another complaint for another time). I killed clisp and tried again. Now the cycle goes: I run (asdf-install:install 'some-package), and asdf-install complains that the GPG keys do not match. This is, mind you, after I downloaded the key chain off of common-lisp.net and imported it. The keys SHOULD match because everything SHOULD be up to date. But no, it fails. Just to be clear, I have randomly tried a wide selection of packages on common-lisp.net. Everything seems fine, but gpg still insists it isn't. Perhaps I did something wrong. At any rate, this is all experimental. Let's just allow it and see what happens... asdf-install proceeds, then spits out this error:

        gzip: stdin: not in gzip format
        /bin/tar: Child returned status 1
        /bin/tar: Exiting with failure status due to previous errors

    For everything, everywhere. At this point, I would assume that I was either an idiot or had just missed something about asdf-install in my quick view of the docs. But I have set this up and used it before. This issue, so far as I can tell, only happens on Cygwin. I then googled asdf-install cygwin and found a myriad of complaints--not identical to mine, with all kinds of advice ranging from just don't use asdf-install to apply this or that patch, and so on and so forth. Needless to say, my end conclusion was that, while I could sit here and try to figure out what's wrong, it just isn't worth it. That's right. I copped out. But I still find the whole process of darcs getting a repository and then adding the location to my .clisprc.lisp file a little wearying. So, I whipped up this little shell script to make life easier:

        #!/bin/bash
        cd ~/lisp
        darcs get http://common-lisp.net/project/$1/darcs/$1/
        echo "(push #P\"${HOME}/lisp/$1/\" asdf:*central-registry*)" >> ~/.clisprc.lisp

    Not as nice as asdf-install would have been. For example, it really only works with common-lisp.net (which is fine, as that is the SourceForge of the Lisp world: almost everything of importance is hosted there anyway) and does no dependency resolution. Finally, it is a little hackish, relying on common-lisp.net's typical project URL scheme.
  • Usability < Operability

    A theme I seem to hear a lot is something to the effect that it doesn't matter how awesome the code/functionality is if it isn't "usable". This view seems, especially, to come from developers and designers who work in commercial, shrink-wrapped software and, coming from that point of view, it is probably true. If the user can't glance through a few tabs and know how to do what needs to be done, then it might as well not be there. They probably won't talk to anyone or read a manual unless they feel truly and absolutely desperate. However, if you work in corporate IT, this particular piece of the world gets turned on its head. Operability takes first place over usability and the reason is simple: if the users don't have what they want or need, they will come to IT and demand it. If it is already there, it will be demonstrated; if it isn't, it will more than likely get added. Not, of course, that this absolves internal programmers of responsibility. Far from it. I know that I have learned a great deal by probing deeper, especially with the question "what are you trying to accomplish?". It is just that, in this case, the very proximity of "the user" changes the whole dynamic. More than likely, they will tell you that something means too much work, or that they cannot do X. So, where do the lines lie? I do not know exactly. The point I am more interested in here is not where they lie, but that at some point, usability really is less important than operability. Indeed, this is probably more apt to be true in highly specialized software of any kind, whether that software is an in-house developed PHP package or some $3 million CAD program for designing microchips, where investment in the software itself is a given and the users have to use the software or else they cannot do their jobs.
  • Regarding: Stop Using Ajax!

    I was just reading an article entitled "Stop Using Ajax!" on Opera's Dev site (Link: http://dev.opera.com/articles/view/stop-using-ajax/). The author's premise comes down to this: Ajax does not play well with screenreaders and other such accessibility devices and, since you don't need it anyway, don't use it! Yes, that glosses a few things over. He also says that Ajax is simply not mature and that, some day in the future, it will hopefully be mature enough to be accessible. The argument really all comes down to accessibility. Yes, he mentions usability, but he does so only briefly, with the majority of the argument focusing on accessibility. The reason is simple: Ajax is a godsend for usability. It allows things to feel faster by not reloading the page (and, of course, perception is 9/10 of the game for end user applications) and allows for usage paradigms that are more like those found in desktop applications (which are more familiar). Finally, it makes the whole process of whatever you're doing less ponderous if you can simply work on one page rather than clicking through a chain of them. As for accessibility, I certainly see the issue. Screenreaders really do not handle Ajax applications. They are, by and large, stuck in 1996. Though, even here, the argument has some flaws. He uses the photo-sharing site Flickr as his example of improving by not using Ajax. However, this example is specious, as Flickr is one of the sites least likely to benefit from increased accessibility. If your vision is impaired (or absent) to the point of needing a screenreader, how likely is it that you will be able to look at photographs? Finally, harping on Ajax for this point is kind of ridiculous. Ajax works well for just about everything but automated tools like spiders and screenreaders. The correct conclusion would be to call on those who write these tools to work better with the web of 2009, rather than impairing the usability of the web for the 90+% of people who are actually glad to see web technologies improve. Furthermore, if we revert to using older technologies, we then do not have people pushing the envelope and there is yet less reason for Ajax to "mature".
  • Massive Multiplayer Games and the Failure of Game AI

    Anyone who has not been under a gaming rock recently knows that games featuring light or non-existent AI but many simultaneous human players have gained a rather large following. Games like World of Warcraft replace game AI with an extraordinary number of humans (ranging, of course, from terrible to astonishing in their skill level). I am happy for those that enjoy these games and their accompanying socialization. However, as a computer scientist (even if an "amateur" in the sense of not having a PhD to my name), I consider it a failure on the part of the discipline. Few games have any true level of intelligence. Most are based on overwhelming the user (i.e. first person shooters where each AI nemesis is cannon fodder--but there is so much cannon fodder one can hardly keep up), "cheating" (see any EA Sports game; Jerry Rice becomes Mr. Butterfingers when the 4th quarter grows too close to a finish; Ryan Leaf and his ilk become the greatest thing in the world; in short, balance simply vanishes), or "peeking" (the computer receiving more data about the user's tactical choices than is possible in reverse). True, these techniques make the game hard, but they lack the fulfillment of beating an opponent who is truly the player's peer rather than some half-omnipotent, half-omniscient being. Don't be fooled: I am not saying that good AI is easy. Quite the opposite. It is, perhaps, the single hardest piece of game development. Graphics, at this point, are not the hard part. In fact, the biggest drawbacks aren't related to programming at all, but rather to waiting for the available hardware to catch up to the software's capabilities. Perfect AI is simply not possible right now. "Good enough" AI usually is, but it would require substantial effort to put together--and why spend the effort? After all, we can simply boast about our ridiculous polygon counts and cool endorsements, right? MMORPGs simply punt on the issue. Yes, there are benefits to this style that cannot be achieved any other way. The most prominent (and, perhaps, the only) one is socialization. Unless computers could actually pass themselves off as human, this part of the enjoyment is simply not replaceable. But let's be honest: the world is so big that the number of people any given player truly socializes with is fairly small. As a rule, most of the humans playing are still little better than AI cannon fodder to that small band of friends. I rather wish they wouldn't pass off the problem, though. There are methods that are truly promising in games. Rather than encoding a handful of basic patterns, neural networks and machine learning offer some interesting possibilities. Who knows? Maybe some day I'll start up an indie game studio with a dedication to AI. I doubt it though. Alas! It is more likely that this will remain the way of it for some time.
  • QuickBooks Frustrations

    These past couple of weeks I have been adding extensively to our internal app's QuickBooks integration, thereby giving me a chance to seethe a little bit more about the state of the SDK. Unlike my past article, this is not a tutorial. This is a technical frustration vent. Querying the "database" is, itself, far too verbose an operation. If you look at my previous article on the subject, you will see a nice long explanation of how to connect to the company file, build a request, and execute it. Why does it have to take that much work just to get a little bit of data? Can you imagine if connecting to a real database were that hard in the rest of the .NET framework? There would be developer revolt. There is, frankly, no reason for it. Actually, yes, there is. The QBFC API is a very, very thin wrapper around qbXML. It shouldn't be. It should be a nice, fat layer that hides all that garbage a little better. It is better than raw XML, which is why I use it, but why couldn't it be a heck of a lot better instead of only a little better? First, we have QuickBooks's speed in dealing with requests handed to it by external applications. It is slow. Oh, man, is it slow. I have used a separate test machine to run the QuickBooks testing (and now I am using a virtual box) and, even with nothing else really running, QuickBooks takes forever to do simple lists of bills received. Even if the final count is, say, ten or twenty bills, it takes forever for the requests to be handled. For goodness' sake, what are they doing internally? Then we have the sample code in the documentation. It is bad. I don't mean not "enterprise-y" or not awesome; I mean it is terrible. The basic workflow it uses to query QuickBooks and deal with the results is just poorly constructed. Moreover, it is wrong. Flat out wrong. For fun, take a look at the code to list out invoices, in particular the function WalkInvoiceRet. It takes in a single argument, an IInvoiceRetList named InvoiceRet, and proceeds to extract the data. That function would not even compile, making it clear that it was a quick copy 'n paste job. How do I know? The argument passed in is a list (RetList, get it?) BUT the following code treats it like a detail. To make this code work, you would have to add a spot somewhere that iterates over the elements in that list or (for demonstration) grabs the first one or something--as written, that C# code would throw type errors at compile time. Then we have other wonderfully intuitive behavior, like line items failing to come back when you use the included data list--even when you set IncludeLineItems to true. Or the fact that the documentation says that, by default, line items are not included when, in fact, they are. ARRRRRRRRRRGGGGGGGGGGGHHHHHHHHHH!!! All right. I feel better now.
  • Printing Labels with CUPS and Windows

    The place I work uses label printers extensively and several users needed to use the same set of printers. Moreover, all but one of these users' computers are laptops, meaning we cannot simply share the printer from a single user's computer, or else the printer will be unavailable when the user takes their computer home. So, I grabbed an old desktop that was laying around and threw Debian on it, converting it into an ad-hoc print server. Our label printers are Zebras, for the most part, and they work beautifully on Linux. The hiccups did not occur until I went to set up the printers on Windows. CUPS, as you may or may not be aware, uses the IPP protocol by default to handle all printing operations, whereas Windows uses, by default (and as usual), its own homegrown protocol. Windows XP does support IPP, but the confusion begins with the drivers. Obviously, the CUPS print server must have the printer's drivers set up to print. However, if you install the equivalent drivers on Windows while the printer drivers are set up on CUPS, confusion reigns. In the case of the Zebra I was setting up, I got a beautiful printout of my label in ZPL, Zebra's internal printer language. Various other permutations resulted in various other types of garbage. It turns out that what you need to do in a situation like this is to set up the printer in CUPS as normal. Then, on the Windows side, you set up the printer using the http:// address (not Samba, or any other option) of the form http://myserver:631/printers/myprinter. When you reach the driver selection, you use the MS Publisher Imagesetter driver under the brand Generic. This is Microsoft's PostScript driver (why doesn't PostScript or PS appear somewhere in that name?). So, what is happening is that the Windows spooler is accepting jobs and outputting them as PostScript. This PostScript is sent to CUPS, which receives it over IPP, converts it into the native printer language (which may very well be PostScript), and everything is all happy and dandy.
  • Typographical Beauty in Programming Languages

    When programming language aficionados talk about the "beauty" of a language, they are referring to several properties of the language. Most typically, this is how smoothly and easily its syntax supports the lucid expression of ideas. In addition, there is usually some mention of how "dense" it looks. In short, if A then B else C is much easier for a human to scan than, say, &#&A==>#BC or some other such nonsense. It ought not to look like line noise. These kinds of beauty are not the ones that I am interested in here. I was goofing off and reading on set theory, in its many variations. When taking my abstract mathematics course (it had a different title, which escapes me; ironically, I do remember its numerical designation), I had no idea there were so many variants on set theory. For a computer geek, it is actually a rather pleasant read. Then again, I use abstractness and theoretical beauty (in computer science, mathematics, poetry, and literature) to purge out some of the filthy code I read in the course of the day, so my view cannot, perhaps, be trusted. As I was going through, I noted something: mathematical notation is much more pleasant to read than code, typographically speaking. So, for example, this: A → B, ¬A → B looks prettier, to me, than the standard if..then..else format we have had going for many years now, and infinitely better than the C family's ternary operator (A ? B : C). The reason for the formats we have is obvious: languages are designed with a United States English keyboard in mind (I'm sorry, internationalists, but let's be honest: there is a reason most languages will freak out if you insert an umlauted o in a symbol; is that good? No, but it is reality). Naturally, fishing through a key map looking for the Unicode symbol ≠ would be a royal pain while coding. So that isn't the symbol used. We have !=, /= (in Haskell), and <>, but not ≠. They are easier to type, but they sure aren't as pretty. One would almost expect a few more languages dipping into the vast resources of Unicode and smart editors, but this does not seem to be the case. The only language I have heard of to date that uses non-standard symbols as a part of the language is Fortress, a language I am looking at again for the first time since skimming an early draft of the spec in college. Fortress accomplishes this by comprehending both the Unicode characters and some other ASCII variety (be it a word or an alternate symbol). Another bridge lies with the editors we all know and love. Unless you are one of the three people who use nano, pico, or notepad for day to day coding, your editor allows (and probably comes bundled with) a mode of some variety for most common languages. Even Visual Studio lets you write plugins that could fill this void. So you could create a language that only understands x = 42 → superPenguins(x) and not x = 42 --> superPenguins(x), but create an editor for your language (call it Foo) that changes x = 42 --> superPenguins(x) to x = 42 → superPenguins(x) on the fly, in a manner not unlike Visual Studio's autocomplete. The danger for a language like Fortress (allowing both) is that you will pretty much get the ASCII art version, rather than the mathematical one. How is this a danger? 
Well, it isn't in the strictest sense of the word, but it half defeats the purpose of putting the comprehension for Unicode symbols in there in the first place. So, why do I mention all of this? Well, I have been knocking a toy language around in my head, and have been considering taking the approach I outlined above to give it a nice extra boost. Naturally, if this is the only thing that a language has to offer, it need not ever be created. I definitely have some other ideas, though; as I indicated before, the language is meant more for fun (both in implementation and use) than as "the next Python" or, even, "the next Haskell" (capturing the minds of academia nuts everywhere). I think I'll work on a preliminary spec and post accordingly. In conclusion, I believe that a more beautiful approach to programming languages is long overdue. With Unicode and "smart" editors becoming almost ubiquitous, there is no reason not to pursue it. Fortress is, I think, the first step in this direction and I, for one, look forward to seeing a great deal more of it in the future.
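    As a small aside, you can already dip a toe into this today in Common Lisp, assuming an implementation that reads UTF-8 source and allows non-ASCII characters in symbol names (SBCL and CLISP both do); the definitions below are my own illustration, not part of any standard library:

        ;; The glyphs are ordinary symbols, so a library can offer them as
        ;; thin aliases for the ASCII spellings.
        (defun ≠ (&rest numbers)
          "Alias for /=, just to show the glyph works as a name."
          (apply #'/= numbers))

        (defun ⇒ (antecedent consequent)
          "Material implication over generalized booleans."
          (or (not antecedent) consequent))

        ;; (≠ 1 2)     => T
        ;; (⇒ t nil)   => NIL
        ;; (⇒ nil nil) => T

    It is nothing like real language-level support, of course, but it shows that the raw materials are already there.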
  • WebFaction

    Well, I have had MCS moved over to WebFaction for a couple of weeks now and I must say that I am one happy customer. I have seen precious little criticism of them on the web, so I can't say I'm really surprised. Most of what I did see was related to their spartan admin interface. Well, it is spartan, but after GoDaddy's ad- and graphics-laden trash that took two minutes to load on a fast connection, spartan is wonderful. In addition, it is the only shared service I have used where I could actually compile software on the machine. I just finished installing hugs (http://haskell.org/hugs/) to my home directory so that I would have something to play with while on the go and it worked beautifully. WebFaction: hosting by geeks is a wonderful thing.
  • Expressive PHP

    When I first began using PHP I found it, like many other languages, annoyingly inexpressive. Once you have used Lisp, Haskell, and friends, it is often hard to go back (as Paul Graham observes in the early chapters of On Lisp). Over time, I have begun to discover ways to use PHP that are at least a little more powerful than the standard procedural spaghetti code that is traditional in PHP-land--many of them documented in the PHP manual, but not in the common online tutorials (which are, I would guess, the most common way people learn the language). This post is about those methods. Note: this is for versions of PHP that predate support for closures and lambdas. First, pseudo-function passing. Interestingly enough, PHP has, for a long time, included a way to dynamically use or call functions in a way that is a mere shadow of lambdas and closures. Despite its limited scope and power, it is still better than nothing. You can dynamically call functions using the following syntax:

        function foo($a) {
            return strtoupper($a);
        }

        function baaz($a) {
            return strtolower($a);
        }

        $bar = 'foo';
        echo $bar('t'); // will echo 'T'
        $bar = 'baaz';
        echo $bar('T'); // will echo 't'

    Basically, PHP expands the variable into the name of the function before making a call. Again, these aren't first-class functions. We are not dynamically creating them or even really passing them. It is more akin to C++ macros (though not quite, as these expand at runtime rather than compile time) than lambda functions. Interestingly, PHP does not stop there. This expansion can go several layers deep.

        $a = 'foobar!';
        $b = 'a';
        echo $$b; // will echo 'foobar!'

    In this sense, the '$' sigil can almost be seen as 'dereferencing' things, after a fashion. The closest thing PHP has to "real" lambdas is, at present, the create_function function. It works by passing an argument list (as a string) and the function body (again, as a string) and returns a 'reference' to the function. This is, of course, more like the compiler hooks that some languages, like Lisp, offer than true lambdas with all-important closures. Second, use arrays like lists. This one does end up feeding off the one above, but is good to mention nonetheless. I can't say I like the term PHP chose for its built-in sequence type. It isn't an array in the C sense. It is actually a hashtable--almost. It would really be a hashtable if anything (including things like objects and functions) could be keys, but instead we are limited to strings and numbers. In practice, this is close enough. Unlike languages like C++ and Java, PHP is not statically typed. This is why we can use arrays as though they were simple sequence types. When we combine this with the dynamic function calls above, we get something nice.

        class Quuz {
            public $something;

            public function __construct($a) {
                $this->something = $a;
            }

            public function toString() {
                return (string)floatval($this->something / 2);
            }
        }

        function stringize($foo) {
            if (is_object($foo))
                return $foo->toString();
            else
                return (string)$foo;
        }

        $a = array(0 => 'aa', 1 => new Quuz(1));
        $b = array_map('stringize', $a);
        // $b will equal array(0 => 'aa', 1 => '0.5')

    There is one interesting problem I have come across with this kind of thing before: you cannot use static methods this way in some versions of PHP. In conclusion, it is a shame that bad spaghetti code is so much the norm in the world of PHP. I suppose it is largely a result of the very thing that made it popular: letting novices get up and running quickly. This is, of course, a noble goal. 
The problem arises when novices suffer stunted growth, remaining forever the script kiddies who unleash 30,000-line behemoths to run a simple little web site. I hope this article helps ease the pain of stiff PHP for someone out there. In time, I expect this article to be completely obsolete. PHP, like Python, Ruby, and C#, is showing itself to be part of a general trend. Namely, that today's languages are starting to import the things we all know and love from the Lisp ecosystem (yeah, that's my term for Lisp, Haskell, OCaml, Standard ML, Scheme, etc. ad nauseam) and make them available to the working programmer. In the meantime, this is how I ease the pain.
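    Since the whole post keeps measuring PHP against Lisp's lambdas and closures, here is a tiny Common Lisp sketch of the real thing for comparison (the names are mine, purely for illustration); this is exactly what create_function's string-built functions cannot give you:

        ;; MAKE-ADDER returns a real closure: the returned function captures
        ;; the lexical variable N.
        (defun make-adder (n)
          (lambda (x) (+ x n)))

        ;; Higher-order use, analogous to the array_map example above:
        ;; (mapcar (make-adder 10) '(1 2 3)) => (11 12 13)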
  • Setting up Bridged Networks (in Linux) For Dummies

    I set up VirtualBox to virtualize some odds-and-ends testing I was doing (specifically, with the intent of freeing up a laptop to be reissued) and encountered the wonderful problem of trying to bridge the connections to the network at large. With VMware Server, this is delightfully trivial. The bridged connections are already set up and all you have to do is pick one off the list when setting up the virtual adapter. VirtualBox works the same way, but you have to set up the adapters yourself. So, here are my notes on doing it. At the system level, what you are doing is creating a bridge (which enslaves a hardware interface) and creating virtual adapters off of it. What you get is something like this: On Debian (and, by extension, Ubuntu) you can create the devices by editing the /etc/network/interfaces file and adding the following sections:

        auto br0
        iface br0 inet dhcp
            bridge_ports all tap0 tap1
  • The Problem with Poker AI

    My wife and I watched Roger Moore in The Saint last night. The episode in question opened with Simon Templar sitting at a table playing poker with some underworld characters. He cleans out the head of the group by bluffing his three eights (I think?) over the other man's three jacks. The opening camera work focused a great deal on the expressions of the players. Then it occurred to me: this is why, at the present level of technology, poker AI will never really be feasible. At the highest level, poker isn't really about hands; if it were, computers would whip the finest poker players right now. There is fairly little one can do about the cards and, what can be done, is very strictly governed by the laws of probability. The game comes down to each player's ability to read the person across from him. When they're right, they either win big or avoid losing. When they're wrong, they lose big. Computers cannot read human expressions. They can have various heuristics built in and they can compute probabilities until it would make any of our (human) heads spin, but they cannot understand people beyond the most primitive tendency tracking. So they will fail, ultimately. Moreover, they cannot offer the player the fun that, by and large, comes with the game: trying to see how well he really knows the person across from him. Computer poker is about hands. Human poker is about people.
  • Blogging Code

    As you may have noticed, some of my snippets have not rendered too well on this website. I have tried a few things, including the fine highlight and pygmentize tools. However, the results have been...ugly. The classes and CSS, as posted, seem to have interfered with the WordPress layout. I just installed the excellent Highlight Source Pro plugin for WordPress and, after revising my recent PHP post, the results look quite promising. I will be doubling back and revising my past code snippets to make them much more readable.
  • Personal blog

    To date, I've kept mad-computer-scientist.com limited to professional and vocational elements. I figure that when I am on programming/computer science blogs, I want to read about programming/computer science. When I am looking at non-techie blogs, I am looking for non-technical information. With a few exceptions, the two worlds do not seem to mix all that readily. So, I have created a separate blog at http://writing.mad-computer-scientist.com/blog dedicated to personal, political, theological, and literary musings. If you're interested, head on over and follow it for a bit. If you're not...well, that's why the blogs are separate.
  • Caveat to PHP array functions

    I was doing some work in PHP that, at least as I was approaching it, made use of many arrays. Having spent too much time in Lisp and company, I used functions like array_map regularly as the more expressive, concise, and elegant way to do my crunching. Several of these functions had their home within a class and were private static functions. My test box had the following version:

        PHP 5.2.6-2ubuntu4 with Suhosin-Patch 0.9.6.2 (cli) (built: Oct 14 2008 20:06:32)
        Copyright (c) 1997-2008 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies

    and the live server:

        PHP 5.2.0-8+etch13 (cli) (built: Oct 2 2008 08:26:18)
        Copyright (c) 1997-2006 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2006 Zend Technologies

    True, the test box was slightly newer, but not so much that it usually matters. In review, the test box allowed code like the following:

        class Foo {
            private static function exampleUpper($a) {
                return strtoupper($a);
            }
  • Periodic Pulseaudio Failure

    The desktop I am on right now is running Ubuntu 8.10, complete with all of its PulseAudio glory (frankly, I wish they had waited until that glory was a little greater before making it the de facto sound system) and I had a problem that ran as follows: I would boot up the machine and sound would work beautifully for a while. After a couple of days or so, sound would inexplicably stop working. I fired up the sound preferences applet, hit the magic "Test" button and, lo and behold!, it failed with a message containing the words:

        audioconvert ! audioresample ! gconfaudiosink: Failed to connect: Connection refused

    So, I would kill, restart, or start any sound service or process I could find, but sound did not return. A system reboot (something I am loath to do) solved the problem--for a couple of days, then I was right back at the beginning. I did some googling this time around and ran the command $ pulseaudio -vv to see the output. There was a great deal there, but the key was this:

        Error opening PCM device hw:0: Device or resource busy

    Okay. So I googled it. After reading a couple of almost-relevant posts, I opened up /etc/default/pulseaudio, changed PULSEAUDIO_SYSTEM_START=0 to PULSEAUDIO_SYSTEM_START=1, then ran $ sudo /etc/init.d/pulseaudio start and sound came alive again! What is this doing? Basically, it takes PulseAudio out of userland and puts it at the system level. The comments indicate that this is a bad idea because shared memory wouldn't work and it "could potentially allow users to disconnect or redirect each others audio streams". Both arguments seem rather academic. I am sure the lack of shared memory access could be a performance hit, but having no sound is the ultimate performance hit (it consumes 100% of the resources with a performance of 0%!) and my untrained ear could hear no difference. The second one is relevant if you have multiple users on the same box who are running sound. This, of course, is not true on my desktop, where I am the only user. Again, I understand the desire to close a security vulnerability but, personally, I would rather have sound and chance someone eavesdropping on the Derek Webb song I am listening to than have to reboot my computer every couple of days.
  • Vim tip...

    I have been wondering for a long time how to copy text out of gvim into the operating system/window manager's clipboard. Doing some unusual work today (most of my programming happens in a bash/screen/vim session; no GUI until testing), I wondered again and finally decided to find out. If you copy to the + register (i.e. "+ followed by a yank command, such as "+yy to yank the current line), it will be available on the system's clipboard. So now I know.
  • Why there will never be an open source VB6

    I was going through some code today and had to google some VB 6 related items. Both Wikipedia and one of Jeff Atwood's old postings have links to a petition pleading with Microsoft to keep legacy VB alive and well. Needless to say, in hindsight, it failed. One thing occurred to me, though: legacy VB would make a perfect candidate for open source cloning. It would be comparatively simple (both the language and the syntax). Unlike Samba, Wine, or Mono, it would not be trying to hit a moving target, as Microsoft is no longer actively developing it. With a COM bridge and a reimplementation of forms (probably through GTK, Qt, or wxWidgets), legacy code could be made operational and most code, that which does not actively use COM, could be almost automatically cross-platform. Moreover, those businesses with a vested interest in legacy VB, i.e. those with a pile of legacy code, would still be able to use and depend on it, and those disenfranchised VB6 programmers would be able to use their favorite language for life. But it will never happen. The reason is quite simple. The people who actually mourn the loss of Visual Basic are not the ones, as a rule, who will be able or willing to write a compiler for it. This is not to say that there haven't been a few open source imitators or derivations from it. Gambas, for example, is VB-like, but it is no VB clone. There are, I am sure, others who have done work heavily influenced by Visual Basic--but no actual clones (at least, none of any real size). The next place to look for a clone (either commercial or open source) would be the companies that have a vested interest in VB, that is, the ones with a huge pile of VB code lying around. Once again, these are not apt to undertake the project, for the very reasons that made them choose Visual Basic in the first place. The business case for VB runs something like this: it has a mammoth company (Microsoft) backing it and supporting it, we can get full commercial support from this mammoth company and we know that they will be around for a while (i.e. they won't go bankrupt and leave us high and dry), and programmers for this language are as common as dirt (so we won't have to shell out a fortune to some obscure consultant after our lead developer quits). In short, it is a safe pick. Obviously, none of these reasons (I pass no judgment on whether or not they are well founded, only that they are the reasons) carry over to a community project or an odd-ball third-party clone (which also runs a pretty high risk of getting sued). To summarize, the problem is not one of technical difficulty or logistic difficulty (i.e. keeping up with a constant stream of changes), but one of demographics. The very people who would most want this product are the ones least likely to carry out the work. Atwood points out in his article above that most VB developers are moving into C#. I think this is true. VB.NET is probably not the future of the .NET platform. Indeed, it has been a fairly second-rate citizen in the .NET world. Read or skim through the ECMA document on the CLI environment. It is clearly modeled after C#, itself modeled after Java, not VB. To all appearances, VB.NET exists largely to ease legacy VB users and apps into C# and the .NET future--at least, that's the strategy. As for how it plays out, only time will tell.
  • Ubuntu 8.10

    ...is, without a doubt, the worst Ubuntu release I have ever used (understanding, of course, that the first release I used was 6.06 or so, and that on an older machine; prior to that I was 100% Gentoo). Not having anything better to publish yet (but a lot on the burner, so stay tuned!), I figured I'd rant for a minute. I run Ubuntu, primarily, on my Compaq Presario F700 laptop. I had everything working that I tried under Ubuntu 8.04. Wireless (thanks to my old friend ndiswrapper), accelerated video (how I love nvidia), the whole shebang. All I hadn't tried was the modem (why are there still modems on laptops, anyway? I can't remember the last time I used one) and the xD/MMC/SD card reader. I swore I wouldn't upgrade. Everything was working and working well. Then I got curious. Kubuntu even defaulted to KDE 4.1, which I had heard so many good things about after the dreadful (or, perhaps, it would be more fair to say incomplete) 4.0 release. So, I dist-upgraded my system--then the world fell apart before my eyes. Suspending and hibernation broke on the spot, wireless followed, and before I knew it, I had broken the nvidia binary driver. I reinstalled. Shortly, I had nvidia back. It took some hassling, but eventually I got wireless back. But suspending and hibernation were broken. After more tinkering, I have hibernation, but no suspend. After trying to deal with it for a bit, I have decided that I don't want to deal with it. I am downgrading to 8.04 and will roll on with life. Here is hoping that 9.04 is a better release than 8.10. More up-to-date software has been touted as the advantage of using Ubuntu over Debian. It is, in fact, an advantage--but it is also a pretty big disadvantage. Frankly, I would rather see one awesome Ubuntu release a year than have to retinker my laptop every six months. I am glad to see, though, that 8.10 finally sets up the restricted drivers automatically, allowing the user to revert to a free-software-only system if they want to. It makes getting up and running easier (I know that the first thing I do on my laptop is enable restricted drivers) and it fits the needs of a much greater percentage of the users or potential users of Ubuntu. Postscript: My downgrade is more or less complete and everything works beautifully again.
  • Why isn't FTP Dead Yet?

    Today I was looking at setting up a git repository for a little utility (which I hope to release shortly) to share code with the big old world, and I found myself googling how to use git, and other distributed source control systems, over FTP and found myself asking "why?". The first half of the why is very simple. I want to share the repository when my killer app (I wish) gets released, but I would like to host it on mad-computer-scientist.com, rather than setting up a Google Code or Sourceforge page for it. It seems like rather too much work for a little app. This being a cheap, shared host that MCP is hosted on, the only file access is by FTP. So naturally, any source control (download or upload) will have to come over the FTP protocol. Which brings me to the second half of the why and the thrust of the whole post. Why, at this point, do we still not have a better, widespread method of exchanging files? FTP, by default, exchanges login credentials in plain text and is, therefore, quite insecure. Yes, there is SFTP and FTP over SSL, but the vast majority of FTP setups do not and would not use these measures. And, at any rate, they are mere spackling over cracks in a poor protocol. In active mode, the server opens a second data connection back to the client, an idea that is problematic for modern NAT firewalls. It would be mean spirited and short sighted to decry any of these faults as being "obviously" wrong. They may be now, but FTP dates back almost thirty years. It is an example of experimentation, both successful and not, and an example of a design that was outgrown by the demands placed on it. The dual socket design and plaintext authentication were not problems when the first RFCs were coming out. They were features. They made the protocol easy to implement and use (back in an era when the idea of a user actually entering raw protocol commands was not far fetched). Today, these things, and others like them, are a pain. A pain that has been hacked around to enable FTP to continue functioning in the 21st century, but a pain nonetheless. So, why don't we have a better file transfer protocol than the File Transfer Protocol? Here is what I would like to see in an NFTP (new file transfer protocol):
      • Drop the whole idea of ASCII/Binary transfer mode. It's all bytes in the end. Use a MIME type, if necessary, to indicate what is being transferred.
      • No more Active/Passive mode. Like HTTP, just have a request/response.
      • Make the authentication process secure by design. No, this does not inherently solve all problems, but, at the minimum, mandate encryption for the authentication stage.
      • A standard way of representing the file system hierarchy in general. I can't remember where, but I remember reading that parsing the file listing format was often a problem when implementing an FTP client because servers differed so much on how they returned the data.
    I'm sure that there are other things that could and should belong on this list. Maybe a protocol like this exists and I just don't know about it. Of course, someone will probably explain how I am an idiot for saying most of this and that's fine. I'm just rambling anyway. But even if we got this protocol tomorrow, it would matter little. FTP is everywhere, especially on cheap hosting servers. It would be quite a while before the majority of the world benefited. Just like it was a long time before anything besides PHP/MySQL was available on most shared hosting accounts, it will be a long time before anything other than vanilla FTP is offered in shared hosting.
The answer to the title is simple: FTP exists because it is the lowest common denominator, which makes it too common to simply die.
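    In the meantime, the closest thing to that wishlist that exists today is probably plain HTTPS: one request/response, no second data channel, and encryption before any credentials cross the wire. As a sketch, an upload to a host that accepts PUTs (say, via WebDAV; the URL and username below are placeholders, not anything I actually run):
      # one encrypted request/response; curl prompts for the password
      curl -T killer-app-0.1.tar.gz -u mcp https://files.example.com/uploads/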
  • PHP\namespaces\backwards_compatibility

    As may or may not be evident, my current professional programming is largely PHP for a transportation company. As such, it obviously behooves me to keep abreast of changes in the world of PHP. The next version of PHP, PHP 6, is set to come out with one major feature that I have personally wanted: namespaces. No language that is used for large scale development can really live without them. You can hack your way around it, but, in the end, you are using namespaces (or packages, or assemblies, or whatever the name du jour is), you are just using them poorly. A case in point is the ubiquitous mysql extension. Every function begins with the prefix mysql_. The reasons are obvious. If you just say query('select...') there is no guarantee that this will not clash with some user defined function. In addition, how many database systems wouldn't want to use this name? Which is the problem. The chance for conflict is simply too large. So, they define a namespace mysql where the ad-hoc implementation for namespaces simply involves tacking mysql_ in front of everything. To be clear: I do not blame the writers of the extension for this. It is a necessity given the current state of PHP. Even in my own code, I have seen a use for this. We have had to write implementations for EDI (ANSI X12) formats. Naturally, the terminology in the files themselves is similar and I usually find myself dividing a file into header, detail, and footer sections anyway. The names would naturally conflict. Usually, the formats do not get used within the same script, so it is not apt to be a problem. They sometimes do, however, and the chance is always there for it to be needed in the future. So, I tack the name of the format onto the beginning of each class name: Edi999Footer, for example. Creating a package Edi999 with a class Footer would seem much more natural and would head off any potential problem very nicely. So, in short, I was looking forward to namespaces. Then I found out what the currently selected separator is: \. Like a great many other people, I do not like this at all. While some were purportedly saying that they do not want their source files looking like Windows registry dumps, my reason is actually that the backslash is, almost universally, an escape character. I know that I will probably read new Package\Nubia() as new Package ubia() while I am scanning through source files (not that these names exist; just a hypothetical). It is just wrong to use the escape character as a separator (on a tangent, it was wrong when MSDOS did it, too). That said, I understand the snag that they had hit. They really were running out of common characters and character combinations. That is, they were if the goal was to maintain near 100% compatibility with the previous syntaxes. To select a separator that would be considered more decent, one would have to break something in the current PHP language. A friend and I tried a little thought experiment on a blackboard. I wrote up a statement like the one above, but removed the separator. Then we tried to come up with one that wouldn't break the current language. We couldn't come up with anything that was really better. Ideas like ==| and ===> were as good as they got, but those are pretty lousy. However, the addition of classes (which I count as really happening in PHP 5; PHP 4 classes were little better than C structs) and namespaces is, itself, a break within PHP.
A break from a dumbed-down Perl with a very scriptish, hackish feel to something that is more akin to the "professional" nature of Java or C#. In short, they are already breaking with the original spirit of PHP, so why not break a little from its syntax? I am not proposing to make PHP as strict as Java or C#. If that's the objective, we should just use Java or C#, but if the thrust of the language is going to be more for "professional" developers then why not modify the syntax accordingly?
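    To make the Edi999 example concrete, here is roughly how the prefix workaround compares to the namespaced version, using the backslash separator as it was eventually implemented; the class members are invented for illustration:
      <?php
      // Today: ad-hoc namespacing by welding the format name onto every class.
      class Edi999Footer
      {
          public $transactionSetCount;
      }
      
      $footer = new Edi999Footer();

      <?php
      // With real namespaces (a separate file; the namespace declaration must
      // come first): the format becomes the package and the class goes back
      // to being plain "Footer".
      namespace Edi999;
      
      class Footer
      {
          public $transactionSetCount;
      }
      
      // Client code elsewhere would then write:
      // $footer = new \Edi999\Footer();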
  • JavaScript Date Weirdness

    If you go to docs.jquery.com, pull open the docs for the datepicker, and navigate to the month of November 2008, you will notice that November 2 occurs twice in a row. Navigate to 2009, and November 1 occurs twice. In 2010, November 7 occurs twice. We are using jQuery's datepicker internally, so we had several users notice this for 2008, causing a colleague and me to dig into this problem. Also of note is the fact that we tried this in Firefox 3, IE 7, Opera 9, and Safari and all of them produced the exact same results. After tracing through the jQuery date picker code, we came upon a line in the source (this was where the script was looping over the dates in the month, creating the table cells for each): printDate.setUTCDate(printDate.getUTCDate() + 1); When Nov 2 2008 was run through this code as printDate, it returned Nov 2 the first time and Nov 3 the second. Moreover, if you removed the expression and ran printDate.setUTCDate(3); the result was the same: Nov 2. In Firebug, we tried manually running the code and got the exact same result. We found that changing this line (and all like it) to printDate.setDate(printDate.getDate() + 1); (that is, using the regular date functions instead of the UTC versions) solved the problem, and it solved it in all browsers. So, that leaves us with the question of why this was caused in the first place. It was nothing special to 2008, as it occurred in every year. At any rate, this solved our problem. I intend to do some additional poking through ECMA 262 to see if I can find the root of the problem, but I've got other fires to put out. We did find at least one other guy who was having a similar problem back in 2007. Addendum: We played with this some more. The ultimate problem lies in the fact that the codebase mixes the use of regular date functions and UTC functions. Making it uniform either way solves the problem. This almost, almost makes sense--but not really. I still want to spend some time with ECMA 262, a pad of paper, and a pen and see if I can figure out why this is true. Maybe someone who knows a little more than I do about the specifics of the JavaScript Date object implementation can shed some light on this.
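    For anyone who wants to see the mix-up without jQuery in the picture, a sketch like this reproduces it (the exact output depends on your timezone; the comments assume a US zone where daylight saving time ended on Nov 2, 2008):
      // walk the month the way the datepicker did: advance with the UTC
      // setter, but render with the local getter
      var printDate = new Date(2008, 10, 1);   // local midnight, Nov 1 2008
      for (var i = 0; i < 4; i++) {
          console.log(printDate.getDate());
          printDate.setUTCDate(printDate.getUTCDate() + 1);
      }
      // In US Central time this logs 1, 2, 2, 3: when DST ends, the extra hour
      // drags the local date back across midnight, so Nov 2 renders twice.
    Advance and render in the same frame of reference (both local or both UTC) and the duplicate day disappears, which matches the addendum above.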
  • Tech Joys

    Post hard drive failure, I am setting the white box I use as a home server back up (sure, the dirty old machine is well short of being a paragon of computing power, but it suffices for the simple purposes to which it is put). After installing FreeBSD on it (I had Gentoo before the failure--I like to keep abreast of various OSes) I am going through the ritual of compiling and installing the software I wanted through ports. As I watched pages of compilation messages and warnings earlier this evening I found that the whole thing was relaxing. Laugh if you will, but I actually enjoy building the software from source that way (that said, I lack a bit of the patience to go out and manually grab source for every library and package; we need computers to automate work, not add it). Now, unless I were doing some special performance tweaks for something mission critical, I couldn't justify building the server that way at work. Even with FreeBSD, which we don't use at the shop I'm at, I would feel much more compelled to simply install the binary packages and roll on. After all, if I am building a server we need a server built! But when I am working on my own, I usually want the result--but I can live on until it is done. So I can relax. Take the time to make sure that it is done right, to experiment and create a more interesting set up (which sometimes conflicts with the previous item), and just, in the software sense, stop and smell the roses. We spend so much time professionally trying to keep the trains moving that it is nice to be able to sit in the depot and watch them come in. A few months ago, I got off a binge of contracting work (most of it vestiges of prior work done while I was trying to stay alive with it between jobs). For a good while, I coded at work then did nothing on my computer at home. After so long of PHP/MySQL from dawn to dusk, I needed a break. Then, I started working on Latrunculi again. I continued the scarcely started subproject of moving it over to Common Lisp (which is now the version in trunk, showing just how far along it has come) and enjoyed refactoring the code, removing issues with the original implementation (some my stupidity, like an overreliance on global variables, and others a library's fault, like the weird way that threading worked in Chicken), and picking up Common Lisp. I got to do three things that most of us do not really get to do at work: concentrate on perfection, do exploratory programming (programming to explore and learn, rather than to merely crank out code for the next deadline), and use a slightly out-of-mainstream language. The first time I did this after a while, I almost literally got a rush of euphoria. That was how good it felt. The tech joys, for me, are the common elements between the two tasks: concentration on perfection, learning, and using alternate technologies. I could natter on here about how these things make you a better tech anyway (they probably do, but I think that most advocates fail to prove whether the practice actually makes its practitioners better or whether those who are better at their trade are simply more apt to enjoy it off hours; there is probably a measure of truth to each), but I won't. I'll leave that to others. Instead, I'll just say that at times when the house is quiet, my wife and son asleep, and I am stealing a few minutes to bask in technical geekery, I thank God for giving us such wonderful toys for the human mind.
  • Catching MUMPS...

    The first time I had ever heard of MUMPS (also known as the M programming language) it was in an entry on the awesome Daily WTF. After the first pass through the article, I assumed, like several other people, that MUMPS was a cooked up name for a language whose identity was being protected. However, one very helpful commenter wrote that the language did, indeed, exist and had not been anonymized. Recently, the PLNews blog announced that there had been a new release of an Open Source MUMPS compiler/interpreter system. Curious, for reasons of archeology, if nothing else, I downloaded the source and compiled it. Then the fun part: I started playing with this relic. MUMPS documentation is a little hard to find. I suppose that the mere fact that some can be found is, in itself, a small miracle. The only real guide (aside from buying a book; 40 bucks isn't a bad investment if you are doing this professionally, but for the casual tinkerer it seems sort of ridiculous) that I found was the quick and dirty manual that came packaged with the compiler itself. The language itself is not unlike BASIC (and, I assume, COBOL or RPG) in spirit. The familiar procedural idiom is highly visible when looking at snippets like
      > set a="foo"
      > write a
      foo
    and it is mostly interpreted (the manual indicates that, according to the M language spec, it is impossible to create a fully compiled M program that includes all the relevant features). The most interesting feature of the language is not its BASIC-esque glory, but the way it handles what are termed global arrays. Arrays in M/UMPS are, basically, multidimensioned hashtables. These act like the arrays you would be used to if you have used PHP or Perl (as an interesting note along these lines, the $ sigil is basically the opposite of the one in PHP or Perl: instead of marking variables, it denotes functions). Once one uses the ^ operator to access "global arrays", one is no longer simply working with data local to the currently running script. Rather, this data is written directly to disk in a tree-like structure which is not unlike the representation used by modern database systems. In addition, basic set operators are included in the language. It is this whole idea that interests me. In many domains, this idea would be nonsensical, but for most business apps that is all we really do. We read the database, do some processing, and write it back out. This model goes completely out the window for such programs as Latrunculi or Ocean that I am working on right now, neither of which uses a database. However, as I indicated, virtually every business application on the planet has this core workflow, even if the operation is to chew up some data and spit out some pretty pictures. Where does this leave us? All programs are stored procedures running on the database (or, all stored procedures are programs at the main computer's level; however you want to see it). Now, there has been a great deal of talk about keeping all of the database logic in the database itself. This method of doing things seems to have, by and large, been passing in favor of using prepared statements in the main application code, eliminating the middle man of stored procedures. For better or worse, this makes the whole idea of separating the database layer out of a database-intensive application (which is what every tracking, accounting, or point of sale system really is) more or less superfluous. Alex of the Daily WTF even wrote an article on this subject.
The line between the business rule layer and the data access layer is so blurry because the line itself is really nonexistent. Business rules are little more than a description of what data we want to retrieve and what calculations we want to run. I wonder whether a language like M might not have a place in today's world. Replace the BASIC side with something more reminiscent of Python, Lisp, or Haskell and you might very well have a winning platform. Probably with a friendly ALGOL-like syntax (making it more like JavaScript or ActionScript than any other language I have mentioned so far). Like I said, not applicable for everything in the wide old world, but a language like this would fulfill a wide need. Of course, in addition to bringing the main language itself up to date with such goodies as lambda expressions, the database backend itself would need to be overhauled to support finer grained querying, concurrency, clustering, and replication. Oh, well. All part of being a "programmer archaeologist".
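    For a concrete taste of globals, here is roughly what a session looks like; the names and values are made up, and the syntax details follow the bundled manual as I remember them, so treat it as a sketch:
      > set clerk="bob"
      > set ^ORDER(1001,"customer")="Acme Freight"
      > set ^ORDER(1001,"total")=149.95
      > write ^ORDER(1001,"customer")
      Acme Freight
      > kill ^ORDER(1001)
    The plain local clerk evaporates when the process exits; the ^ORDER global is persisted to disk the moment it is set, is visible to every other process on the system, and kill removes the whole subtree. That is the entire "database layer".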
  • What I hate about MySQL

    MySQL has gotten better as time has gone on. I want to make that clear up front before I bash a handful of things about its current state. It has gone from being little more than an SQL front end to flat files to being almost a real database (if you use InnoDB and friends). My main gripes:
      • No full outer join. This one irritates me to no end when working on diff-type queries (ones that, like the diff utility in UNIX, take a set of rows and compare them against another set, getting a difference) because I have to union three queries together, rather than simply writing one query with a full outer join (a sketch of the workaround follows at the end of this post).
      • Constraints and concurrency are not enforced by default. You have to set up InnoDB to make it work properly. There is simply no excuse for not maintaining relationships in a RELATIONAL DATABASE. InnoDB is great, don't get me wrong, but I should not have to set up an add-on (and to run properly, you will need to configure the engine, at least a little) to get something so basic and fundamental. With MySQL 6.0, Sun has promised an end to this with Falcon, but that has yet to happen. 6.0 isn't out yet, and it wouldn't be fit for production use for a while longer even if it were.
      • Stored procedures. These were not added until version 5.0 of MySQL (many shops and shared hosts are still on 4!), but now they are here--sort of. The fact of the matter is that stored procedures really aren't usable in MySQL. The syntax is clumsy, requiring messing with delimiters to even create them. They do not work well from the command line because of this, which makes testing harder. In addition, the syntax is lacking quite a bit featurewise. The easiest example of this is also what should be simplest: how do you iterate over a cursor? Simple 101 feature, right? Not really. At least, not in MySQL.
    Those are the biggest things I can think of off the top of my head. I've got a hunch that I would not be happy about replication or binary logging either if I had the time to set them up. Now for any of you reading this (if anyone does read this), you may ask: why not just use PostgreSQL? Or Oracle? Or even Microsoft SQL Server? The shop I am working in will not invest the sums for Oracle or MS SQL Server, so those are out. The current reality is that we will not be leaving MySQL any time soon.
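    To illustrate the full outer join gripe, this is the shape of it; the table and column names are invented for the example:
      -- What I would like to write:
      SELECT a.part_no, a.qty, b.qty
      FROM warehouse_a a
      FULL OUTER JOIN warehouse_b b ON a.part_no = b.part_no;
      
      -- What MySQL makes me write instead (and the exclusive "what changed"
      -- version needs yet another variation with IS NULL filters):
      SELECT a.part_no, a.qty AS qty_a, b.qty AS qty_b
      FROM warehouse_a a
      LEFT JOIN warehouse_b b ON a.part_no = b.part_no
      UNION
      SELECT b.part_no, a.qty AS qty_a, b.qty AS qty_b
      FROM warehouse_a a
      RIGHT JOIN warehouse_b b ON a.part_no = b.part_no;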
  • QuickBooks and ASP.NET

    Let's start out with a scenario: we have a series of web apps for internal use (running on LAMP boxes) and we want the data to be pushed into QuickBooks semi-automagically. We wanted more magic and less semi, but the accountants wouldn't let us. At any rate, we wanted to keep the same setup. We eventually got things working, more or less, by running a web service in ASP.NET on IIS on the same machine as a QuickBooks instance and letting the PHP side talk to that through SOAP. It works pretty well on the whole, but getting QuickBooks to interoperate with ASP.NET turned out to be a pain. After Googling, trial, and error, here are the steps I took to get the libraries to work all happily:
      1. Install QBFC and qbXMLRP2; if you installed the SDK, the installers will exist on your hard drive. The filenames are QBFC8_0Installer.exe (install with the command QBFC8_0Installer.exe /v"ALLUSERS=1") and qbXMLRP2e.exe (install with the command qbXMLRP2e.exe /RegServer).
      2. Go to Control Panel -> Administrative Tools -> Component Services.
      3. From there, navigate to Console Root -> Component Services -> Computers -> My Computer -> DCOM Config.
      4. Right click on qbXMLRP2e and select Properties. Here, you are going to grant permissions to the various users associated with an ASP.NET call so that the COM calls can be made.
      5. Click 'Customize' in the 'Launch and Activation Settings' section.
      6. Grant 'Local Launch' and 'Local Activation' permissions to the following users, substituting the name of your machine for 'MACHINENAME': Network Service, ASPNET, IUSR_MACHINENAME, IWAM_MACHINENAME, INTERACTIVE.
      7. Click 'Customize' in the 'Access Permissions' section and grant 'Local Access' to the same users.
      8. Finally, fire up QuickBooks and log in (preferably with a user specially created for the purpose).
    The long and short of it is making sure that the correct permissions are assigned to the correct COM components. In our setup, we left QuickBooks running constantly on a dedicated Windows XP virtual machine. I am unsure whether there is a better way to handle this, but it does not really seem like there is.
    References
    http://idnforums.intuit.com/messageview.aspx?catid=7&threadid=9209
    http://idnforums2.intuit.com/messageview.aspx?catid=7&threadid=9266&enterthread=y
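    Postscript: the LAMP side of this is the easy part. Something along these lines is all the PHP callers need; the service URL and method name here are placeholders rather than our real endpoint:
      <?php
      // talk to the ASP.NET bridge service sitting next to QuickBooks
      $client = new SoapClient('http://qb-bridge.internal/QuickBooksService.asmx?WSDL');
      
      $result = $client->AddVendor(array(
          'name'  => 'Acme Freight',
          'terms' => 'Net 30',
      ));
      
      var_dump($result);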
  • Pizza Experience

    I usually keep this blog pretty much on-topic (the topic being a conglomeration of computer-related topics that I am interested in or working on), so I don't feel bad about wandering off once or twice. Last night, I had a pizza experience. My wife and I ordered a pizza and as we were eating it, I noticed one line printed on the side of the box: "Your pizza experience was managed by Joshua". Now, Joshua, should you, perchance, come across this: the pizza was good. And I get the point. If I have a complaint, ask for Joshua. But that is one of the worst examples of PC sales talk I have heard in quite a while. My "pizza experience"? Are you talking about the food or the rumbling my stomach does all night after eating a few slices of it? What the heck is a "pizza experience"? And why, for Pete's sake, does everything have to be an "experience"? From using Windows to eating pizza, marketoids intone about the "experience". Shut up about this ethereal "experience". Eating pizza is not achieving nirvana. Using the OS does not make me giddy (what it enables me to do can, but not maneuvering the system itself). So, knock it off with the "experience" and just give me good pizza.
  • Ubuntu...Dell...Argh!

    I was going to install Ubuntu on my Dell Desktop (n-Series) at work. So, I burned a CD, fired it up, and...the setup crashed to BusyBox with errors about the ata device. I had seen this before. It had been a while, but I had seen it before. A few minutes of irritated googling later, I came up with the following steps:
      • Boot to the live CD.
      • Add these options to the kernel line (press F6 to get there): irqpoll pci=nomsi
      • Because of my dual head setup, it was also easier to boot into safe graphics mode.
    This allowed me to run setup and get Ubuntu installed...and I shouldn't have had to mess with any of it.
  • Joining a Debian box to an Active Directory Domain

    I've been building a few servers, as of late, at work. For our Windows workstations, we have an AD domain controller setup which, obviously, handles the authentication for each of those machines. For us, as for our users, it is nice to be able to use our normal logins for all of the server maintenance. So, I joined the boxes to the domain. Like so many things in the Linux world, this task is, ultimately, not hard and has been done by a gazillion people, most of whom have written on it to some degree or another. But, at the same time, the documentation that is received is almost always sketchy, dropping an "obvious" step or two and simply ploughing through. I found some good resources, but still ended up "patching" my directions to get everything working as it ought. Most of the directions came from the first reference below, the author of which seems to be a man after my own heart. However, I still had to do some tweaking. Note: all commands run as root. Anywhere REALM is used, it means the full domain (i.e. myorg.local or myorg.net, not simply myorg). Anywhere DOMAIN is used, the short name is what it means (myorg, not myorg.local or myorg.net). pdc_ip_address is the IP address for the primary domain controller. Should be obvious, but let's follow the KISS principle, shall we?
      1. Install the software. Notice that, as opposed to in [1], I installed the package ntp, not ntp-server:
         apt-get install libkrb53 krb5-config samba winbind ntpdate ntp
      2. Stop the services:
         sudo /etc/init.d/samba stop
         sudo /etc/init.d/winbind stop
         sudo /etc/init.d/ntp stop
      3. Kerberos needs to be able to do a reverse DNS lookup on the domain controller [1]. This caused me all sorts of problems. In our network, this simply wasn't happening automatically. Rather than try to figure out why, I added the domain controller to /etc/hosts and restarted the networking service. The downside to this, of course, is that if for some reason (like, maybe, a network upgrade) the IP for the domain controller changes, /etc/hosts has to be updated by hand.
      4. Configure Kerberos as in [1]. Add a section like the following to the [realms] section:
         REALMNAME = {
             kdc = pdc_ip_address
         }
         In the [libdefaults] section, set the default realm like so:
         [libdefaults]
         default_realm = REALMNAME
      5. Configure ntp as in [1]. Add a line of the form server pdc_ip_address to /etc/ntp and start the service with /etc/init.d/ntp start.
      6. Configure Winbind as in [1], with the following supplemental lines in /etc/samba/smb.conf (note: the last few lines disable printing; this was good for the server I was using and suppressed complaints in the logs, but if you need printing take them out):
         realm = REALMNAME
         workgroup = DOMAINNAME
         security = ads
         idmap uid = 10000-20000
         idmap gid = 10000-20000
         template shell = /bin/bash
         template homedir = /home/%D/%U
         winbind use default domain = yes
         winbind enum users = yes
         winbind enum groups = yes
         winbind separator = \
         load printers = no
         printing = bsd
         printcap name = /dev/null
         disable spoolss = yes
      7. Configure nsswitch. Make the following changes to /etc/nsswitch:
         passwd: files winbind
         group: files winbind
         Then, update your configuration with ldconfig.
      8. Join the domain with:
         sudo net ads join -U "DOMAINADMIN"
      9. Start samba and winbind:
         /etc/init.d/samba start
         /etc/init.d/winbind start
      10. Test. Run:
          wbinfo -u
          If you get a list of domain users, you're on. Otherwise, check logs and doublecheck yourself.
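    Beyond wbinfo, it is worth confirming that the NSS side is wired up too; getent should now list domain accounts alongside the local ones (the account name here is a placeholder):
      getent passwd | grep -i someuser
    If domain users show up in that output, the PAM changes below are all that stand between you and logging in with them.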
Make the following changes to your PAM configuration:
      # /etc/pam.d/common-account
      account sufficient pam_winbind.so
      account required pam_unix.so
      
      # /etc/pam.d/common-auth
      auth sufficient pam_winbind.so
      auth required pam_unix.so use_first_pass
      
      # /etc/pam.d/common-session
      session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
      session sufficient pam_winbind.so
    Try to log in with a domain user. This can be done "at the box" or through an SSH session if sshd has been configured to use PAM. This is almost verbatim from [1]. The changes occur in making an addition to /etc/hosts and restarting networking BEFORE continuing and in some extra lines to /etc/samba/smb.conf. Oddly enough, when I was working on a workstation instead of a server, Ubuntu's GUIfied version of this process was overly involved and a general pain in the neck.
    References
    [1] Using Winbind to Resolve Active Directory Accounts in Debian
    [2] Samba Documentation: Chapter 24: Winbind: Use of Domain Accounts
  • Running CL-SDL in CLISP

    I have been experimenting with ways to do this on and off, but I finally got CL-SDL loaded into CLISP and without the UFFI patches that are on sourceforge. It is the kind of thing that should not have been hard and, in the end, it really wasn't. It was just a matter of doing the research. I have learned more about Common Lisp packages, implementations, and FFIs than I would have expected on this little project. The main thrust is that UFFI does not support CLISP, though CFFI does. Fortunately, CFFI includes a compatibility layer that allows it to use UFFI bindings. While I had read this on cliki.net, it took a great deal more googling to figure out how to use the darn thing. On the lispwannabe blog, the writer shows an asdf package for uffi that loads cffi's compatibility layer into asdf as uffi. This is important, because a great many other things expect to find uffi there. At this point, using cl-sdl's example1.lisp works when I used the following code:
      (require 'asdf)
      (asdf:operate 'asdf:load-op :uffi)
      (asdf:operate 'asdf:load-op :sdl)
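    The shim itself is tiny. I won't reproduce the lispwannabe version verbatim, but the idea is roughly a uffi.asd somewhere on asdf's search path that simply pulls in CFFI's compatibility system (a sketch, assuming the cffi-uffi-compat system that ships with CFFI is installed):
      ;; uffi.asd -- register a system named UFFI that is really cffi-uffi-compat
      (asdf:defsystem :uffi
        :depends-on (:cffi-uffi-compat)
        :components ())
    With that in place, anything that asks asdf for :uffi (like cl-sdl's system definitions) quietly gets the CFFI-backed implementation instead.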
  • Loading CL-SDL...

    I have been playing with Latrunculi again as of late. With the contracting I have done, it has been a while since I have had the time, but here we go again. I have been working on a Common Lisp branch of Latrunculi (which can be found in Subversion under branches/clisp-branch). The reasons are several. First, there are no real bindings in Chicken to SDL or GTK. There are some half finished, sort of working ones, but I don't really feel like writing large quantities of binding code. Secondly, I wanted to learn Common Lisp. Thirdly, the threading works, but it is a nasty mess (as is all of the graphics code). Finally, some of Common Lisp's idioms and built in datatypes are a better match for what I am trying to do (real 2D arrays instead of vectors of vectors, anyone?). I don't really like using large quantities of SRFI code or bindings that are not compatible with any other implementation, which is another reason that Common Lisp seems like a good choice. In Common Lisp there are fewer implementations, but even the bindings are often compatible across multiple implementations (CFFI and UFFI provide this). One of the big goals here was to continue using OpenGL for the primary game rendering, but use SDL to load images, display text, and handle windowing. In trunk, I have written my own Targa loader (which does not implement all of the format, as I only wrote enough to load the textures for the game; which means that, when saving them, very specific options have to be set in Gimp for it to work...), created bindings for some obscure text-rendering library (the link for which is dead and it would not have been a long term solution anyway due to its non-commercial license being in conflict with the GPL), and used GLUT for windowing and events. All in all, a mess that I want to clean up. Fortunately, bindings already exist for the libraries in which I am interested in the form of the CL-SDL project. Other goals of Latrunculi involve being cross platform (and that includes Windows) and having the ability to distribute binaries (since few users, even Linux users, compile from source). CLisp and ECL seem to be the best for this, both having Windows versions and compilers. ECL, I understand, has threading so I may use this in the end. With this background, the task seemed rather easy: load the bindings and go. The catch is that the choice of Lisp implementation was defined primarily by cross platform compatibility, as several implementations (SBCL and CMUCL among them) offer only support for *NIX platforms. Neither CLisp nor ECL has true UFFI support. ECL has a UFFI compatible FFI layer which, on the surface, seems like it ought to make the whole thing easy. However, I have not found an easy way to make use of this feature. So far, I can see a few possibly good ways to get this baby running:
      • Make use of ECL's UFFI compatible FFI; most likely, this would include modifying CL-SDL's ASDF package not to depend on UFFI or to depend on ECL's FFI package, or writing some code that "aliases" ECL's FFI to ASDF:UFFI so that everything else is happy and dandy.
      • Use the CLisp patches for UFFI and try to get it to run.
      • Use CFFI's UFFI compatibility layer to load up the bindings and use them.
    This sounds harder than it really is, I think. Most likely, a lot of the problems I have are stemming from the precanned nature of using Lisp packages through Ubuntu's repositories. I am thinking that I will probably try to do this without taking the "easy" way out and using .deb packages.
Instead, I will probably try to go from source beginning to end by hand and see if I get anywhere. I wanted to post a final explanation of whatever steps I got to work, but this little outlook may solicit some reaction or, at least, serve to get my ideas out.
  • Piracy

    One Cliff Harris recently wrote a post about piracy. I know, I know: there are about a zillion articles on the world wide web about piracy. Loud bombastic voices promoting it, loud angry voices condemning it, and timid little voices saying "can't we all just be friends...you know, get around a campfire and sing kumbaya?" Cliff Harris, apparently (I had never heard of him until running into this on reddit), runs a small indie-style game studio and posted a call for emails from pirates. The article above is his report on his findings. The real question he asked was why do pirates pirate? On its face, the question sounds like asking why a bird is a bird. On the other hand, even questions rooted in pedantry can be of interest. :) The list in the article itself is nothing that you probably haven't heard before if you have been following the issue even vaguely. I don't follow it much anymore; I haven't heard anything new in quite some time and it gets rather old reading the exact same flamewar again and again. If I want to read a good flamewar, I go back and read some archives of the Torvalds vs. Tannenbaum flamewar of the century. Despite being an old list, a few things stood out to me on a reread:
      1. No DRM. Of all the reasons presented, this is the one that I sympathize with the most. It is absolutely not a good reason to take something without paying the creator, but I could understand buying a copy and then pirating it. That way, the creator gets paid and you get a DRM-free game. In my experience, it is not the hardcore pirates that get hurt the most by DRM. It is the honest, "I bought this and I want to use it" type that gets burnt. I have written before about the effort it took to get around a bit of DRM for use with a legitimate copy of the game. In that example, I had a legal copy of Windows running legally in a VM on which I was trying to run a game that I bought in a store. Nothing shady about it. But it was a major pain.
      2. Demos are too short. This one is absurd. Someone may prefer a longer demo, but really: you need to pirate a game to try it out? My gaming budget is pretty small. I have been buying games once every few months, I'd estimate. I could buy more, but I'd rather have a new book on my nightstand than another game for my PC or the Xbox. I seldom load demos of any kind. I usually look for a specific kind of game (action or strategy), then read the reviews on the latest and greatest. If something sounds good, I buy it. Otherwise, I don't. Sometimes I load a demo, but it has been a while. Usually, other people's impressions mean a lot more.
      3. Price. Irrelevant. I think gas costs too much. Does that give me the right to fill up my car and drive off without paying? Of course not. Why should games be any different? Many games are overpriced when they come out (the article quotes people complaining about "$60 games"). However, if you wait a few months something amazing happens: the prices drop. Far and fast. It isn't long until they run at $20. A little longer and they are at $10.
      4. Quality. I agree with the criticisms of the state of modern game development. Most games do lack originality, most are poor knockoffs of other games, and so on. But if the game's quality is too low to be worth paying for, then why play the stinking game at all? I have seen a lot of rotten games that I wouldn't buy--but then again, I wouldn't play them, either.
    Really, I think the sticking point is #3: price.
DRM is a hassle that afflicts the innocent, but I think, for the most part, the people who claim this as a reason (that is, if they even know what the heck DRM is) probably are not those most affected by it. Even in my example above, I was able to work around the issues I had. Some people simply don't think the quality is worth the price (the paradox of which I have already pointed out: if the quality isn't worth it, why play it at all?), and others unabashedly don't want to pay for it. All in all, it seems rather indicative of our society. Attitudes of entitlement, the notion that if you can get away with it, it's fine, and so on.
  • KDE 4...

    Now playing at Windows machines everywhere. Well, not everywhere, but at least at Computerworld's. Faced with doing a bit of web work, I fired up into my Vista partition so that I would have the glory of IE 7 at my beck and call...and because we had been running some tests at work on it and I was too darn lazy to reboot into Linux. I saw the aforelinked article a little before this most auspicious occasion and decided to download KOffice on Vista. The reason? I wanted to use Krita for my image work, rather than gimp. The reason is a tired old one, but still true: I hate the way the gimp creates about a zillion windows, cluttering up everything. Usually, MDI is a bad thing, but image manipulation is one of the few occasions on which I would personally sanction it. Heck, without virtual desktops (either the built in ones on Linux or through the fine add on Virtuawin for Windows) I'd say that the gimp is well nigh unusable. Anyway, I digress. The article on Computerworld is actually pretty favorable towards KDE 4 apps on Windows. I really wish that the situation were as sunny as they made it out to be. I used, or tried to use, Krita and Amarok (which is the finest music player, IMHO, that I have ever used). Krita hung and crashed and Amarok, well, I gave up on Amarok: I have all of my music on my Linux partition which I mount on Windows as the L: drive. I figured that I would specify the path to Amarok for the collection and away I'd go, listening to my oggs happily (which is a pain to get set up on Windows Media Player: it requires a separate codec download and still fails to show any ogg files in the collection, just MP3), except for one glitch: in its current state, Amarok 2 on Windows will not allow you to select a directory on any drive except the C: drive. Now, I understand that Amarok is in alpha and that KDE 4 isn't much better than one, but I will say this: I can't really use the KDE 4 suites on Windows for my main work, yet. It is just too flaky. I hold out high hopes. As a developer, I understand that new software requires some work to get fully polished, but KOffice 2 isn't quite ready to challenge OpenOffice as the best Office clone (which is another rant for another time).
  • Recaptcha

    Well, I installed recaptcha on this blog last week. I was sick of receiving tons of spam for such sleaze as I would rather not think of. Recaptcha was selected on the recommendation of a colleague. This past week, a form that we set up for a client was creating spam so bad that our (legitimate) servers were being blacklisted all over the globe. Finally, we put in the captcha (over objections from more sales and marketing-oriented minds) and it, combined with changing the static IP for our mail server, got us unblacklisted. So, it worked at work and I am happy to say that I have not had any spam in "awaiting moderation" since installing it. I am equally sure that it will see use in a website that I am currently building. The other cool thing about recaptcha, besides the fact that it is an excellent captcha in its own right, is the somewhat novel method used for generation and verification of images. From their website: "reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly. But if a computer can't read such a CAPTCHA, how does the system know the correct answer to the puzzle? Here's how: Each new word that cannot be read correctly by OCR is given to a user in conjunction with another word for which the answer is already known. The user is then asked to read both words. If they solve the one for which the answer is known, the system assumes their answer is correct for the new one. The system then gives the new image to a number of other people to determine, with higher confidence, whether the original answer was correct." Cool, huh? It also occurs to me that the usages of this could go well beyond aiding in the OCRing of a bunch of documents. If their OCR software is using neural networks (and today, whose isn't?) the amount of training data that could wind up in their particular network is nothing short of astounding. It would be nice if we could see the end result! The project itself is being run by Carnegie Mellon so I'm sure that if anything truly interesting comes of it, something will be published. That said, the site doesn't seem to contain any references to the influence this could have on artificial intelligence and character recognition so I can't even be sure that they are trying to observe the pattern matching or if it is just a bright idea to improve on existing QA methods for OCR. Now, on Friday no less, you get a twofer: a captcha recommendation and rambling on a tangent about the AI involved. But, that is how the MCS's mind works.
  • InnoDB Diversion

    At the MySQL Performance Blog, the good writer took time out recently to show us his script to convert tables to InnoDB. Recently, I also had to convert a large quantity of MyISAM tables (come on! you're better off with SQLite if you're going to use MyISAM for an application) to InnoDB. My approach used, not the fine tools from Maatkit, but good old Bash in conjunction with the MySQL command line client:
      #!/bin/bash
      # Emit an ALTER TABLE statement for every table in the database named by $1.
      echo 'show tables' | mysql -uroot -ppassword $1 | sed "/^Tables_in_$1$/D" | awk '{ print "alter table " $0 " engine=innodb;" }'
    Naturally, root's password would not be password and, if you are on a server hosted/administered by someone else, you would not want to leave the password in the shell's history, but you get the idea. Note that, as written, this only prints the ALTER statements; pipe its output back into the mysql client to actually apply them. The advantage: on a Linux box running MySQL, you can depend on bash, the mysql client, sed and awk being installed a lot more than Maatkit.
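    For the record, invoking it looks something like this; the script name is just whatever you saved it as, and the database name is passed as the first argument:
      ./myisam_to_innodb.sh my_database | mysql -uroot -ppassword my_database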
  • XSLT is AWK in 2000...

    At work, I have been working on a bit of functionality to allow the users of our system to fetch quotes from a vendor live in our system. Now, this particular vendor offers an XML API as the one true way to handle all of this. You send them a query crafted in XML and you get back an answer crafted in XML. However, XML is not really human readable or, at least, not human presentable. I had heard of XSLT before and knew that it was a way to transform XML documents; however, I had no opportunity to use it as I have fairly little contact with XML. This seemed like the perfect opportunity to learn and use a tool ideally suited to what I was trying to accomplish, so I dug in. First, before I go any farther, let me say: I LOVE XSLT! At least, insofar as I can love anything that comes in an XML package. It allowed me to do exactly what I needed and I can already foresee other uses for this. So, to sum up, this isn't a rant, but XSLT ends up being almost a letdown. When you hear the acronym and see the terminology being slung around, it sounds like there is a great deal more involved. XSLT, though, is basically awk in the twenty-first century. In transformation mode, you basically list a series of templates along with what you would like the output to look like, using special tags to insert values from the current segment. Compare, if you will:
      Awk:
      $2 ~ /foo/ { print "hi" }
      XSLT (forgive the square brackets being used in place of angled; a hazard of Wordpress):
      [xslt:template match="foo"] hi [/xslt:template]
    The idea is somewhat contrived and makes assumptions as to what Awk will regard as field and line separators. However, the idea is demonstrated: in both cases the "program" (awkinology) or "template" (xsltinology) is a list of rules, each of which is applied to the input. Indeed, the whole sequence is not unlike several attempts made at an XML comprehending Awk. Though, in my opinion, XMLGawk is a little more awkward than XSLT. One important difference between XSLT and Awk is that, if multiple rules match, Awk applies them all whereas XSLT has a ranking method to decide which rule to apply. It is not in vain that it is written that "there is nothing new under the sun."
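    To make the comparison a little more concrete, the stylesheet I ended up with is shaped roughly like this; the element names below are invented stand-ins for the vendor's actual response format:
      <?xml version="1.0"?>
      <xsl:stylesheet version="1.0"
                      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <!-- one rule per interesting element, much like awk patterns -->
        <xsl:template match="/quoteResponse">
          <table><xsl:apply-templates select="quote"/></table>
        </xsl:template>
      
        <xsl:template match="quote">
          <tr>
            <td><xsl:value-of select="carrier"/></td>
            <td><xsl:value-of select="totalCharge"/></td>
          </tr>
        </xsl:template>
      </xsl:stylesheet>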
  • Prettying up PHP Code

    I love functional programming languages. I really do. One of the things (among others) that is very nice about functional languages is that they are far more predisposed towards writing beautiful code (the benefits of which are a common enough topic). At work, I don't work with functional languages, I work with PHP. While PHP 5.3 is well on its way towards having the key features of a functional language, its roots are clearly as a simplified Perl. As such, it is a language particularly prone to having ugly code written in it. I was working on some code that I knew would be fairly ugly and, having read of some PHP pretty printers in the past, I regoogled for a bit. I eventually settled on the PHP_Beautifier tool at PEAR. It seems to work fairly well and the filter architecture looks promising. After some tinkering, the following command got (for me) fairly good results:
      php_beautifier -f $1 --filters "IndentStyles(style=allman) ArrayNested NewLines(before=T_CLASS,after=T_COMMENT,before=if)"
    In simple terms this specifies the following rules:
      • Allman/BSD indentation style
      • Nest arrays
      • Newlines before classes and if statements
      • Newlines after comments
    It certainly isn't a perfect match for my coding style (the definition of perfect, right? ;-), but it came pretty close on a pretty good test. Things I would like to tweak but couldn't get to work with the current filters (it would probably require creating a new filter or altering an existing one):
      • Newlines after certain blocks. For example, I usually put a newline after the end of an if/elseif/else sequence, but I couldn't find a way to make this work reliably.
      • Indentation of HTML in a sequence of echoes. In the code I have to maintain, there are long strings of echo foo, echo bar, echo baz. Not the way I would do it (I prefer either using templates or, if that is not an option, using heredocs with echo), but I also don't want to have to rewrite it all.
      • The option to indent all code between the opening and closing tags. This helps when it is interjected into a ton of HTML. I honestly don't know if I would use it all the time, but it would be nice to have.
    Admittedly, this is nitpicking stuff. php_beautifier is passably documented (finding command line invocation methods in the docs is kind of a pain as it is kind of an afterthought in a sea of library docs) and does an excellent job of pretty printing PHP code.
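    For a sense of what those filters buy you, a contrived one-liner like the first snippet below comes out roughly like the second; the exact output can vary a little between PHP_Beautifier versions, so treat this as illustrative:
      <?php
      // before
      if($subtotal>100){$rate=0.05;}else{$rate=0.10;}
      
      // after, with the Allman brace style applied
      if ($subtotal > 100)
      {
          $rate = 0.05;
      }
      else
      {
          $rate = 0.10;
      }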
  • Windows Terminals

    A colleague of mine was out for a few days this week and, for reasons of efficiency in our hectic, you-never-know-what-is-going-to-happen schedule, I moved from a back room to his desk. Of course, I had to borrow his PC with the desk, as lugging PCs around for a couple of days would be a waste of time. So, I settled down to bang out code on his Vista box, where I have been spoiled all this time on a Kubuntu 8.04 desktop running KDE 3.5. Now, to put this in perspective, my ideal IDE is screen, a shell (bash being my favorite, for now), and my array of other tools (grep, find, awk, sed, etc.). This friend of mine had installed Cygwin which, by default, runs under the DOS terminal emulator under Windows. I do not think it unfair to say that the DOS terminal emulator is, perhaps, the worst I have ever used. Being spoilt on far better terminals and preferring to work from the terminal, I went off in a quick search for what general purpose terminal emulators there are out there for Windows. Here is what I found:
      • Putty - putty has a built in terminal emulator that, in my opinion, is wonderful. However, it is only for SSH connections, so I can't run a Cygwin shell through putty without pulling some stupid trick like running the SSH service from Cygwin and then logging in through putty.
      • Poderosa - very nice emulator, written in .NET 2.0 (so it is Windows specific) and backed, from what I understand, by the Japanese government. Poderosa sports a tabbed interface and nice point and clicky love for configuration. In that sense, it is not unlike Konsole or Gnome's console.
      • Rxvt - Cygwin ships with its own terminal emulator, rxvt. This emulator is, almost inexplicably, not installed by default but, rather, an add on package. To make it the default, you also need to edit C:\cygwin\Cygwin.bat.
      • Terminator - Terminator is, according to the author, written primarily in Java with a smidgeon of Ruby and [what was it?]
    In addition to making life quite livable on someone else's PC, I have also gone ahead and added these to my own Vista partition. As for which is my favorite...I can't really say, though I think that Poderosa is becoming a favorite as it is the roughest equivalent to Konsole that I have seen for Windows. Putty, on the other hand, has long been a companion of mine. A single, small executable, Putty is, perhaps, the most convenient way to get an SSH connection going from a borrowed computer. Rxvt is, of course, more in the *NIX tradition of terminal emulators. The point here is less to indicate a preference than to create a good old-fashioned list.
    References
      • http://en.poderosa.org/
      • http://blasphemousbits.wordpress.com/2007/02/12/rxvt-solves-many-cygwin-woes/
      • http://www.chiark.greenend.org.uk/~sgtatham/putty/
  • Setting up a connection to a Windows VPN from the command line

    I, like many people, work at a place that uses a Microsoft-based VPN, but I want to use Linux to connect to it. Recently, I was trying to VPN in to work to get some, well, work done. So I retackled the whole problem again. I tried two GUI utilities to set this up. I couldn't get K/NetworkManager to do it and I only got KVPNC to work after mucking around in some configuration files. Ultimately, after a great deal of googling and experimenting I used this sequence of steps to get up and running, all command line. In the long run, this is nicer anyway as I often like to configure these kinds of things over SSH. Command line setup notes:
      1. Make sure that pptpclient, ppp, and pptpd are all installed.
      2. Set the following options in /etc/ppp/options.pptp:
         lock
         noauth
         refuse-eap
         refuse-chap
         refuse-mschap
         nobsdcomp
         nodeflate
      3. Add a line of the following form to /etc/ppp/chap-secrets:
         DOMAIN\\USERNAME REMOTENAME password ips
         where REMOTENAME is the name that you want to show up for your computer on the remote network, password is your plaintext password, and ips are the allowed addresses (set to * on mine).
      4. Chmod the previous file 600.
      5. Create a file named after your connection in /etc/ppp/peers with the following options:
         remotename REMOTENAME
         ipparam [the file name of the connection]
         pty "pptp vpnserver --nolaunchpppd"
         name DOMAIN\\USERNAME
         usepeerdns
         require-mppe-128
         refuse-eap
         noauth
         file /etc/ppp/options.pptp
      6. Run pon [connection name]
      7. Add DNS servers to the end of /etc/resolv.conf
      8. Add routes with the command: route add -net .0 netmask 255.255.255.0 dev ppp0
      9. To log off the VPN, run poff [file name of the connection]
    References
    http://ubuntuforums.org/showthread.php?t=91249&page=4
    http://wiki.archlinux.org/index.php/Microsoft_VPN_client_setup_with_pptpclient#Configuring_and_Connecting
  • Don't Trust Users

    One lesson that I have learned working "in the industry", and am still learning it seems, is never to take the user literally. Very often we, as programmers, hear, either directly from the users or indirectly through designers and/or salesmen, what the users want, and well we should. There is, however, a fountain of evil in taking all of their statements at face value and implementing them directly into the software. In my own travels, for example, I had built a tracking system for a client. Included in this tracking system was user management, and the client asked for a way to delete users from the system. Now, like any half-way civilized system, the user IDs were used to track the user's activities through the system, not the name, so doing as the client technically asked (deleting the users outright) would have been ruinous to the archived data, which still needed to work properly. Lest someone think me completely insane, users could be deactivated from the system in the original design. Once deactivated, the name would still show up under user management (so that it could be enabled once again, if need be), but would not be available for anything else (i.e. new activities could not be performed under that name). An explanation of the design decision did not suffice. So, I slapped a button up with the word "delete" on it, removed "deactivation", and made it so that "disabled" users did not show up under user management. After this was done, the user was satisfied. Bear in mind, all that really changed was adding a magic button that said "delete" on it, but the action performed was the same. Yet the customer was happy. The moral here is not that the user is stupid for not noticing or that the user should care about how the changes are implemented, but that you don't implement what the user wants in the literal sense. Rather, you give them the appearance of what they want. To some, this may not exactly seem like a ground-breaking revelation. However, I have seen many systems, including some of those where I work now, in which the user's word was taken at face value and implemented as stated under the hood. The easiest example I can give, without really giving details, is one where the users said they wanted two systems that were completely separate. Yet, their requirements made it clear that these two things were not meant to be entirely separate at all. The coders who implemented the design followed the users' demands literally, under the hood. Meanwhile, as we continue to extend and maintain, life is being made increasingly difficult because of a technical decision based on the users' "wisdom". A "real-world" caveat is in order here: implement the appearance of what the user wants rather than the technical letter of it, but don't tell them that the decision was ever made, let alone that you didn't do it their way. The reactions to such news will vary, from chewing your ear off with an explanation of why they are absolutely correct to an outburst of anger, but it will never be pleasant. Just some advice from someone who has hit this and seen it hit a few times. Frankly, I think it is a more prevalent problem in either contract work or developing software for use within a company, where the users get a more direct say in the process. In some ways, this is a good thing, as the potential is greater for the users to get what they genuinely need, but when unqualified personnel start dictating technical decisions, you get a classic Dilbert scenario.
  • An interesting workaround...

    I fell in love with Creative Assembly's Total War series when I picked up a copy of Rome: Total War a little ways back. Comparatively speaking, I do not game much, but, when I do, I am a strategy gamer, first and foremost. When I bought Rome, the expansion pack and gold edition were out, but the original was cheaper. I didn't feel like paying for the extra expansion pack when I couldn't even be sure that I would enjoy the original game, so I bought the original. It worked fine on my XP box and, when I bought a laptop running Vista Home Premium (which was swiftly repartitioned to dual boot with Linux; Windows for .NET and games, Linux for real work), I was pleased to find that the game ran without a hitch. By this point, I had become a Total War fan, so I recently purchased the Total War: Eras package, which contains Rome, Medieval, and Shogun with all of their associated add-ons. I uninstalled my ordinary edition of Rome and got started installing the Gold editions of each of these classics. Shogun and Medieval went through without a hitch, but when I got to Rome, I could not get the installer to run (when run manually, it crashed with a cryptic error message) and the autoplay would not start. I googled around for a bit and what I found was that the problem appeared to be that the DemoShield software that was used with Rome: Gold didn't run properly under Vista. Interesting and somewhat disturbing. Then I had an inspiration: I took CD 1 of the original Rome (it shipped in a 3 CD set originally) and got the AutoPlay menu. Then, I switched the Gold Edition DVD that came with Eras for the original CD and hit the "Install" button. The installer ran and even installed Barbarian Invasion. The same process was needed to get the Alexander pack to install. So, if anyone else out there has had some issues with Vista and a newer Rome...
  • QuickBooks Note

    The previous post (which is the "real" post) was an article/tutorial/note set that I wrote after doing a QuickBooks integration project. Hopefully, it will make life easier for some poor schnook out there.
  • QuickBooks Integration

    The QuickBooks API ("the API" or "the QB API" hereafter) is anything but quick. The API itself has an unintuitive, somewhat convoluted structure. Worse is out there and, truth be told, it wouldn't be all that big a deal except that the documentation is terrible to boot. What little exists is mostly a patchwork of poorly written and largely incomplete examples. The most primitive reference is bur‪ied. This article is a collection of my notes through QuickBooks divided into two sections: the basic structure and idea of the API and some hints for making the way through the documentation. It is important to realize that the QB API is really a family of APIs that work off of a common infrastructure. You can do the same things with any of them but tasks will be easier/harder, longer/more concise, etc. depending on which version you use. The two I know of are qbXML and qbFC. qbXML is an XML schema for the requests that get sent to the server and the responses that get returned. On the other hand, qbFC is a hierarchy of classes that model qbXML, but allow you to build the requests programmatically with greater ease as you do not have to work through DOM. When all is said and done, qbFC is nothing more than a pretty face for qbXML. Once you run the requests, the XML is built automagically and the responses parsed out automagically into objects. Below, I will be discussing the qbFC API. The main reason for this article is that I saw many QuickBooks example snippets sprinkled across the web and a handful of pages that Intuit generously refers to as documentation, but no real tutorials or explanation as to the how or why of things. While I cannot explain the why, I can, at least a little, shed some light on the how. A Hitchiker's Guide to the API It all begins with a session. That is the QuickBooks motto. We begin by creating a session, then create a message set. In the QB API, a message is a generic term for interacting with QuickBooks. It can be a query for information, an a bill addition, an invoice addition, pretty much anything. After putting together a set of these messages into a message set, the session is told to run all messages after which it returns a response set. The user can then cycle through these, testing the type to determine what to do with it (i.e. is it a vendor result? information on an invoice that was added?). That is the executive summary. So, what does that mean in code? It means, first of all, that all interactions with QB follow pretty much the same template and that getting it all together is pretty much a matter of dotting your i's and crossing your t's. A Stereotypical QB Session Here is a simple example of this, written in C# using the QuickBooks object-oriented API, that adds a new vendor to QuickBooks: QBSessionManager sessionManager = new QBSessionManager(); sessionManager.OpenConnection("appID", "Create Vendor"); sessionManager.BeginSession(quickbooksFile, ENOpenMode.omDontCare); IMsgSetRequest messageSet = sessionManager.CreateMsgSetRequest("US", 7, 0);
  • Windows is the new UNIX

    I recently finished reading the "Unix Hater's Handbook" all the way through for the first time. For those not in the know, it is an over-the-top, semi-serious screed against the evils of UNIX in the 70's through early 90's. For the curious, a full copy can be (legally) found online at http://research.microsoft.com/~daniel/unix-haters.html It is actually quite interesting to go back a ways for those of us who are of the newer generation of computer users. To put things in perspective, my first computer wasn't a Tandy, it wasn't an Apple II. I didn't travel down to the local RadioShack and plop down a thousand dollars for a pile of components, a soldering kit, and a manual that I then used to build a primitive system. My family was actually late getting into the computer scene, which wasn't really surprising given that we were behind in every way technologically. In the late nineties, we boasted a large black and white television with VCR and broadcast television (no cable or, later, satellite). The first computer in the family was an outdated 386/486 given to us by a friend of my dad's. My start in computers was after the advent of the still-running AoW (Age of Windows; sounds like a good game title). I am what you may call a second generation Unixer (and LISPer, Vimmer, and ...). My first Unix was Linux, Fedora Core to be specific. I took it up originally so that I could claim familiarity with it on my resume, but swiftly fell in love with it. As my main software was already Linux-compatible (Firefox and OpenOffice for desktop apps, I did Java programming during the classes I was taking, and used the MinGW ports of GCC for my tinkerings in C++), the transition was painless. This, however, is drifting away from the original point. The point is that, as someone who is a Unix guy in a post-Unix era and never lived during the Age of Unix, the time trip is always fascinating. It is a strange fascination, I know, but I love to read about defunct hardware and software, thinking about that mythical day in the future when I say I wrote in Python and the answer is "Python? That's ancient. No one's seriously used Python for 10 years. Try Titanium. It is the coolest language ever". It is like hearing the battle stories of an aged warrior. So, when listening to these "battle stories", I found it most interesting first and foremost that my battle stories will, someday, sound just the same--except with the terms Unix and Windows interchanged. Here are, in my opinion, the highlights of their complaints, the executive summary, if you will:
    - Unix isn't user-friendly
    - Unix is poorly documented
    - Mail is a mess (delivery fails, goes to the wrong place, etc.)
    - Usenet
    - Unix is insecure
    - Unix is bloated and slow
    - Unix requires a lot of hand-holding, feeding, and care to keep up and running; this is a nightmare for sysadmins
    - The file system
    - C++
    - The shell, pipes, and "power tools"
    - The programming environment
    Even after reading the book in its entirety, and enjoying it every step of the way (so I am not flaming someone who said something I simply didn't like), I still don't understand why a couple of those items count. Usenet, while being run primarily by Unix boxes, is not Unix--you don't need Unix at all to run it, as the authors themselves acknowledge, if I recall correctly. Moreover, #3 is questionable as it refers to userland apps.
Sure, sendmail is awful (as a matter of fact, it is the only program that I have used where make had to be run after changes to the configuration files--a rather frightening notion in its own right), but does that make Unix awful? Word stinks, in my opinion. Does that make Windows trash? Or take C++. That is a programming language that is anything but unique to the Unix world. Until the Java craze, it was the thing in the Windows world as much as the Unix one. Complaining about C++ and saying that this is a Unix problem is as ridiculous as complaining about Java and calling it a Windows one. Back on topic, though, some of the remaining complaints are valid and others are purely a matter of taste. But what stands out about this list? Is it not what Mac and Linux advocates are complaining about in Windows? That the system is slow, buggy and insecure? That Windows servers require a lot of attention to keep running and relatively secure? Why, I do believe it is! Why would that be? Do we really learn so little from the systems that were built before ours? I think that part of the problem, though, lies in being the dominant system. Unix "back in the day" and Windows now were almost ubiquitous as the OSes used to run the computers of the day. As such, the OS sees a wide range of usage scenarios that, if left unaddressed, produce unhappy customers who leave the reservation. So, the company adds something here and something else there and people are happy--for now. Until the weight of such additions starts to drag down the system and strains it to the breaking point. Another common thread is that, in both cases, the technology became dominant before it was mature. Windows 3.11 wasn't what you would call a mature product when it was pushed out, but it filled a void in the market and then used that position to make sure that others couldn't do the same. Somewhat similarly, UNIX was let loose on the big wide world shortly after its creation (to play a game on a PDP machine, no less!). Both grew, but neither grew up. Unix (non-caps) has grown up a lot since Windows took over the world, but Windows, in a lot of ways, still hasn't grown up. The base system still shows its single-user roots and "ending a task" means pleading with runaway software to die and go away. So, why did Unix mature after it was pushed out of the spotlight? Capitalism and market forces. It had to improve enough not to die. Likewise, Windows is "borrowing" many ideas from its competitors as they grow in market share. Vista is a sad OS X impersonator in terms of desktop gadgets and eye candy. PowerShell is an attempt to create a bash/.NET hybrid. All in all, things are getting interesting, but I think the core element here is competition. A monopoly hurts not only whoever else attempts to start a competitor, but also the consumer, depriving them of the state-of-the-art in exchange for the inertia of the monopoly in question. Contrary to what many a fan boy would claim, the best thing for the industry is not an Apple takeover. Nor is it a Linux takeover. It is to have at least two, preferably three, big competitors in a dead heat. Whether that is Mac/Windows/Linux or MINIX/Haiku/Sun doesn't really matter. When this happens, you will see the following vicious circle:
    - Company A creates some dandy feature X
    - Company B copies it and extends it, trying to EEE (embrace, extend, & extinguish)
    Who is A and who is B will change and cycle in its own right, but neither will remain stationary. The customers' wants will be heard because if A doesn't listen, B will. If B won't listen, C will. If none of them will, someone will start D up and supplant them all. In conclusion, we need to remember two things:
    - Competition is good
    - "Those who don't understand UNIX are condemned to reinvent it, poorly." – Henry Spencer
  • Nifty

    I was doing some work this morning that involved writing large amounts of very repetitive PHP code. I, personally, have a bad tendency to write large quantities of untested code when the structure is inherently repetitive (also using vim's regex search and replace to do as much of the work as possible). Then what happens is that I run it and find all of the stupid syntax errors I made while typing faster than I should without testing as often as I should. Now, the CLI PHP interpreter has a nifty option: delint. Why delint? I have no idea, but it checks the files for parse errors. While this doesn't exactly solve all programming bugs, it solves the ones most common on the project I am doing now. So I wrote a nifty little bash script that will take in a list of files, run php -l on all of them and, if there are errors, display them in a slightly more readable fashion than you would get straight from the command line. I am releasing it here under the BSD license (which is appended as comments to the end of the file). I know, I know. It's a short script to attach a license. But, if you have any doubts as to the necessity of such a move, read this. Download it here.
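    The original script was bash (and is in the download above), so I won't try to reproduce it from memory, but the shape of the idea is simple enough that a rough equivalent sketch, written here in C# for consistency with the rest of these notes, may be useful: run php -l over each file passed in and report only the failures. The class name and output format below are mine, not those of the actual script:

        // A rough sketch of the idea: run "php -l" over each file given on the
        // command line and print the output only for files that fail the check.
        using System;
        using System.Diagnostics;

        class PhpLintRunner
        {
            static int Main(string[] args)
            {
                int failures = 0;
                foreach (string file in args)
                {
                    var psi = new ProcessStartInfo("php", "-l \"" + file + "\"")
                    {
                        RedirectStandardOutput = true,
                        RedirectStandardError = true,
                        UseShellExecute = false
                    };
                    using (Process proc = Process.Start(psi))
                    {
                        string output = proc.StandardOutput.ReadToEnd() + proc.StandardError.ReadToEnd();
                        proc.WaitForExit();
                        if (proc.ExitCode != 0) // php -l exits non-zero on parse errors
                        {
                            failures++;
                            Console.WriteLine("*** " + file);
                            Console.WriteLine(output.Trim());
                        }
                    }
                }
                return failures == 0 ? 0 : 1;
            }
        }

    Same idea, different glue; the bash version just wraps the same php -l invocation in a loop over its arguments.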
  • The Trouble With Frameworks...

    For a contracting job I am working on, I have been using Telerik's Web controls, and it has brought to the forefront of my mind the problem with a great many of the glitzy toolkits. Most toolkits (especially the ever-plentiful GUI variety) make hard things hard and easy things impossible. Case in point: my work with Telerik. Databinding, sorting, and client-side events were a major pain, and I still can't get the windows in the window control to stop hiding the other dialogs. It seems that in the rush to say "we have more features", many of these projects forget that most of these features quickly become a tangled mess for the control set's user. On the PHP side of the world, I have written some basic grid classes of my own. They are not nearly as featureful as Telerik's, but they are much easier to use. No ASPX or long-winded declarations, just declare an instance of the class, pass in some values (column names and such) and a MySQL result resource and BANG! you have a grid. Any other functionality can be easily added on, or the whole darn control can be inherited and extended for the specific use--and herein lies the problem with the big commercial solution. With Telerik, I get a big monolithic blob. It either works for me or it doesn't, and it tries to do it all. If the feature set were minimalist, covering the most painful aspects of building such a grid, I could readily build it up, through inheritance or scripting, into just what I need for my project. As it is, I just try to push my way through. This isn't a commercial vs. open source argument. It is the difference between providing a minimalist starting place and building up, versus starting with a palace and trying to strip it down to a garage. It is also the same reason that I favor Gentoo over Fedora: it gives me a minimal system and the tools I need to build the system I want. With Fedora you get everything and the kitchen sink--and if you don't want that much, you have to scale it down. Here is the upshot: it is easier to take something small, but extensible, and build up than to take something static and monolithic and scale it down.
  • Microsoft Windows DE (Dream Edition)

    A great many Linux/Mac/BSD enthusiasts declaim what is wrong with Windows and how their OS of choice addresses these issues. I, obviously, share something in common with these people, because my system of choice is, for now, Gentoo Linux. What I fail to hear from any of them is this: what can be done to make Windows a good system. No, "base it off of BSD" is not a valid answer, because that is not making Windows a good system, that is rebranding BSD. So, as someone who rides both sides of the fence, here is my list of things that would make Windows a much more comfortable system to use.
    - Stability. "Windows XP is pretty good!" is the cry I typically hear right about now, and it is correct. It is pretty good, but it still falls far short of what it could and should be. When a program hangs in Linux, I run kill -9 on it and it's dead. Under KDE, if you try to close an unresponsive program you will get a popup asking whether you want to terminate the program. If you do, the program dies--immediately. Under Windows, the computer still locks up as a whole (whereas it doesn't on Linux) and you have to try pretty hard to end a runaway process. "Do you wish to end this process?" Yes. "Are you sure?" YES. "Do you wish to end this process?" YES! YES! YES! PLEASE KILL THE DARN PROCESS!
    - Shell. "It's the 21st century. Who wants to use the shell?" Well, I for one still like the shell. It has been pretty well proven that, if you know the keys, the keyboard is still faster and, for those repetitive tasks, nothing beats the shell for scripting. Various attempts have been made at GUI scripting (Apple's pre-OSX AppleScript, Windows "offers" a little with VBA, and KDE offers DCOP), but none of them offer the ease and flexibility of shell scripting. The closest Windows has come to a shell interface is offering a DOS shell (before the killing of DOS) and a DOS emulator (after the death of DOS). I have taken to installing bash and Cygwin on every Windows box that I use consistently to handle this deficiency. The Windows PowerShell is mostly a bash clone (with some neat .NET interop features which are wickedly cool, going so far as to allow command line scripting with business objects stored in DLLs) and would do quite well. I would like to have seen PowerShell about ten years ago, packaged with the system by default. The existence of the DOS shell doesn't hurt Windows usability and I don't see why packaging PowerShell in its place should.
    - Virtual desktops. This handy dandy little tool comes on virtually every Linux desktop system, and it is something that quickly integrated itself into my workflow. I use four desktops and usually sort my windows into them by task. I would really like to see genuine virtual desktops available in Windows. Even Mac has only provided half-hearted support for this feature, and both have "borrowed" from the Linux/UNIX tradition. "What kind of crooks are you? If you're gonna steal, steal!" (unsourced; you'll have to figure out where it came from).
    - Package management. Apt, Portage, Ports, and RPMs all offer an integrated way to manage software. With Windows, you see a plethora of installation packages, each of which functions in its own special little way and uninstalls in its own special little way. It would be nice to coalesce these functions into one API and one stop. As a side note, it would also be nice to be able to grab Microsoft's free downloads from within this system by default. For example, as a developer I require (yes, require) .NET 2.0 and 1.1, and I plan to install the 3.0 SDK soon. When installing SQL Server Express 2005, I had to go out and manually fetch MSXML 6. Why on earth isn't it packaged with SQL Server or, at least, grabbed and installed automagically? Speaking of user friendliness...
    My point here is explicitly not that Windows should morph into Linux or Mac. Rather, it should at least learn some real lessons about what is good in those systems and incorporate it into Windows.
  • Visual Studio as one big, honkin' calculator

    All, or almost all, of us who have taken Algebra-level math or higher know that, as the levels progress, the allowance for calculators increases. For grade school math through Algebra, I was not allowed to use a calculator (in Algebra, an exception was made for trig functions, but that was it); the math had to be done by hand. The reason is obvious: if you have to do it yourself rather than punching it into the calculator, you will learn the concepts, not the operation of your new Itanium TI-7819922-ZETA. In Calculus and Physics, we were allowed to use more powerful calculators. Each step of the way, the emphasis was on learning how to do the math, and the calculator was a tool. The problem with the calculator as a tool is that it can do several levels of the math before it. This is a real time saver if you know the work that the calculator is doing, but a reliance on it can destroy your understanding of mathematics if you allow the calculator to do your work for you. Why am I bringing this up? Have I had a sudden, wonderful burst of nostalgia? No, I've been doing some contracting work with the wonderful ASP.NET controls made by Telerik. They really are good components and, for many features, can cut development time. "Great," you're asking, "what does that have to do with calculators?" Just be patient, I'm getting there. What I noticed about the documentation for Telerik's controls was that a lot of the information and "Quick Start" material is given in terms of using Visual Studio's form designer. Now, to their credit, the raw classes and tags are there, but the emphasis, as with many texts, is on using the pretty GUI tools. This got me thinking about what Visual Studio is. It is a wonderful tool and, like Telerik's controls, can be used to shorten development time, particularly on large projects. The problem is, however, the people who use VS without fully understanding what is going on under the hood: what code is being generated, what techniques are being used. I'm sure we've all seen code where things were labeled TextBox1, Button2, etc. Usually, what I've found is that the people who leave those names intact are the ones who do not really muck around in the code. They try to cobble visual data sources to visual controls, writing as little code as they can possibly get away with. The result is, of course, bad code. It is analogous to the person who survived algebra by punching numbers into his calculator. He moved on from one level to the next, but he never really understood the concepts involved. In this sense, Visual Studio is like a calculator: a timesaver for the adept, and a pitfall for the novice. I am not saying that Visual Studio is bad or shouldn't be used, any more than I would say that a calculator is bad and shouldn't be used. The final point is that Visual Studio, and tools like it, need to be used in their place. And they should never, EVER be used to teach the concepts.
  • Configuration as Programming

    Between some of my contracting work as of late, which has involved installing/upgrading other people's servers, and rebuilding my own home network server (which provides such glorious services as printing and music to any location in the house via wireless), I have spent a lot of time digging, poking, and prodding through configuration files. From mail servers to web servers, from PHP to MySQL, configuration files are a dime a dozen. Of course, the first "version" of the configuration isn't usually correct, any more than the first version of a given source file or class is usually correct. So, at the end of the day, what do we get? Configure, restart the appropriate services, and test the functionality. Then you find out that foo's config file conflicts with baz's config file and the whole process starts up afresh. The point is that the cycle very much resembles what would be considered "development": edit, compile/interpret, and test. Just an encouraging thought next time you're poking through goodness knows how many files: you're coding just like the guy sitting encamped in front of his C# compiler.
  • A Bit of Software

    My wife loves Stargate fanfiction, but would like to be rid of the foul language. I had created a few shell scripts (which used bash and awk in unison with a flat-text file containing regular expressions), but she soon found these too arcane to use. Problems crept up that I had never had before and haven't been able to reproduce since. What to do, what to do? Well, I sat down and wrote her a brand new version in pure C# (not that she cares), with a pretty GUI (well, almost; I hate the way the widgets run to the sides, but I have not yet taken the time to fix it), that accesses an SQLite database. I am now releasing this auspicious software under the name of Swearinary 0.1 to the public under the Mozilla Public License (MPL) v. 1.1. No doubt you will soon read about updates to this selfsame software. If you're interested, pop on over to the projects section and grab the tarball.
  • Elegance

    Elegance -- that one mystical property that all geeks hold dear to their hearts. Whether we are programmers, computer scientists, mathematicians, or physicists, we all value elegance. In fact, it can be said that it is this appreciation that separates the true geeks from the interested bystander or tradesman. It goes beyond a simple curiosity to a true appreciation for an innate beauty that the outsider cannot understand. Fine, that is all nice and good, but what is elegance? I mean really. We can look at things (programs, theories, solutions, etc.) and say whether they are or are not elegant, but what is the litmus test? The first one to chime in that it is entirely subjective gets whacked on the head with the planet Neptune. Elegance is not in the eye of the beholder. Elegance is the ability to boil the entire solution down to one unifying idea that holds throughout the whole. By extension, this means that the fewer special cases are involved, the better. The perfectly elegant solution wouldn't even have to handle special cases--they would all be taken into account automatically. Of course, we seldom get that far, but that is the goal. One last way of stating it: the closer you've come to completely satisfying Occam's Razor, the more elegant your solution is. Don't believe me? Let's take some examples. Particle physics is about as inelegant a solution as one could ask for. Sure, it works (in fact, it has the best experimental track record in modern science), but physicists are increasingly dissatisfied with it because it is inelegant. Now let's take a closer look at the other side: Calculus. Calculus IS elegant. No whining stories about how hard it was to learn or whatnot, it IS elegant. The finest example, for me personally, of just how elegant it is came in Physics II during my last undergrad semester, when we started doing Gaussian surfaces. The professor showed us two ways to handle the problem: a convoluted algebraic formula that we could memorize, or a couple of integral equations that we could memorize and use to derive the equation needed for the problem at hand. The second method won hands down. What about what I said earlier? Can Calculus be boiled down to one essential idea? Yes, it can. It is the idea behind the limit and, consequently, behind the derivative and integral as well: the idea of breaking some potentially horrifically complex surface/line/whatever down into infinitesimally small pieces which are relatively easy to compute, and summing them up to get an estimate of the answer. In theory, if we could sum a truly infinite number of these little deltas, we would get the One True Solution. In practice, the error is so small we take the answer to be correct. So why is elegance important? Is it a mere aesthetic pleasure? No, I would argue not. I would argue that, because elegance is tied to simplicity, elegant code is better, more maintainable code. Notice here that there is a big difference between "clever" code and "elegant" code. Most clever code is not in the least bit elegant. Very often, "clever" code relies on a non-obvious, complex property of the system to make itself work, thereby making it harder to understand what it is doing, how, and why.
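    For what it's worth, that "one unifying idea" can be written down in a single line of standard notation (nothing here is specific to any particular problem): chop the interval into n pieces, sum the easy little contributions, and let n go to infinity. In LaTeX form:

        \int_a^b f(x)\,dx \;=\; \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i^*)\,\Delta x, \qquad \Delta x = \frac{b - a}{n}

    Derivatives, integrals, and those Gaussian-surface problems are all variations on that single limiting move.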
  • A Day at the Opera

    I have had a spurt of web programming and web design work as of late. Along with a fresh new openSUSE install, this has driven me to give Opera 9 a try. I've been using it as my primary browser for a couple of weeks now, leading me to decide to pen a few words on this venerable piece of software. Opera is the Apple of web browsers. It is probably the prettiest browser you can use. I personally love the way it renders. Everything is very smooth, particularly the fonts. It isn't even something I can really put my finger on, so much as it is just the general experience. Also like Apple, Opera is an "all-in-one": no configuration, relatively little customization. If you like it? Great. If you don't, tough. The package IS the product. From BitTorrent to ad blockers, just about everything you would normally download an extension for in Firefox comes packaged right into Opera. Whether you consider this good or bad is an entirely subjective matter. On the whole, I find Opera to be pretty much on par with Firefox. There were a few things that I did like better. Not really technical issues, just preferences. I liked the way Opera automatically alphabetizes the bookmark listings. I keep a lot of bookmarks (which makes the Opera/Firefox/Netscape parlance of "bookmark" more accurate for me than that of Internet Explorer), so this makes it a lot easier to dig up older resources. As crazy as this sounds, another thing I liked was the JavaScript error reporting, which mattered because, as I said, I was developing a web app. I did have one complaint: Opera crashed on me--a lot. Programs crash; it is, to some degree, a way of life. Nothing is perfect and, sometimes, it really is safer just to die, but Opera has given me more random crashes than I care to think about. At least session saving worked well. In conclusion, draw your own. These are my experiences. Download Opera and tell me yours!
  • The Blog

    Life has been hectic as of late. Details will remain sparse, but I plan to get some writing up on this blog soon, but, as the title implies, this blog post is about the blog itself. Part of the reason that I have had gaps between posts here is that, by nature, I am a person with many interests and, while computers are a big one, they are by no means the only one. So, I am going to take a moment to admit what some of them are. Literature. I love literature, poetry, folklore, and mythology. So much so that going back to school to work on an MA and PhD in English literature sounds lovely--except that I don't have $100K to spare. Music. I have recently begun trying to learn guitar. My wife bought me a guitar over a year ago. During the last  few months of her pregnancy with our first son, I lost complete track of this attempt. As far as music  that I like, I like some Christian contemporary (though not much), Celtic in all flavors, some folk, and some oldies rock. Writing. I have a pile of finished or partially finished short stories that I have been writing as well as a partially put together novel manuscript. History, politics, philosophy, and theology. I have an interest in all of these things, though not, perhaps, as deep as some of my other interests. Anyway, I mention it because this blog probably, in order to prevent these times of radio silence, will begin including anecdotes from my other interests. Anyway, stay tuned!
  • Work on the Bootstrapper

    Work's been hectic lately, so I haven't had as much time to work on Ocean, with other responsibilities at home and such. Anyway, here's what's been happening of late. The byte code generator for the bootstrapper is almost done. So, as the ultimate test, I started writing the interpreter. Nothing fancy, nothing interesting (yet). I quickly hit a roadblock. Picking types for fields works pretty well, however, types for functions (methods, if you prefer that parlance) do not. The return type is not inferred at all, which causes an issue when trying to infer types for some variables and the parameter types are also not inferred. The attempts to fix these bugs quickly and easily have resulted in some clumsy code (like, for example, a "junk generator" that is used to generate "throw away" byte code and infer types). So, a decision needs to be made. Here are the ideas I am looking at. Resurrecting the C# interpreter (it was deleted from Subversion a while ago) and using it to interpret the compiler compiling itself and the "real" interpreter. This has a great deal of merit. The C# interpreter was pretty much working, except for a few features that were missing like continuations and macros. This need not be a problem. I do not typically use either feature when writing Scheme code and could probably avoid it. However, it might prove to be easier to use these advanced features in the code for the "real" compiler, as it is planned to include some rather beefy features behind the scenes. This is not a deal breaker for two reasons. First of all, the language the bootstrapper understands does not currently support either of these features, so if the interpreter does not offer them, it is not losing any ground. Continuations could probably be implemented slowly, but accurately in this ethereal "bootstrapping interpreter" without too much extra work. Alternatively, something truly odd (and slow) could be done. The C# interpreter could be used to interpret a fully featured interpreter which itself interprets the compiler compiling itself and the fully featured interpreter. So, the idea would look like this: C# Interpreter -> Scheme Interpreter -> Compiler (as it compiles itself and the Scheme Interpreter). The idea wouldn't be fast when building, but it would allow fully featured code to be used in the compiler itself. If continuations were added to the C# interpreter and macros (through an external library, perhaps) to the Scheme, the work could be split and all the features attained. The competing plan of action is to continue implementing the bootstrap compiler. The compiler itself could use a simple, aggressive form of SUA based on Algorithm 2.1 in the paper "Storage Use Analysis and its Applications" by Manuel Serrano and Marc Feeley (available here). Types could then be inferred and the code cleaned. Continuations could wait until they are built with the final compiler and macros could be added to the interpreter as above so that the picture becomes: C# Compiler -> Scheme Interpreter -> Scheme Compiler  (compiling itself and the interpreter) We could then gain macros, but not continuations, when writing the compiler itself and we could use neither in the Scheme interpreter. The build, however, should be much quicker. What to do? I am leaning heavily towards the latter and will probably start writing some code to see how it works out. I expect macros to be a far more important feature for this type of work than continuations which are, largely, a fancy form of goto. 
If continuations are used often, it amounts to high-brow spaghetti code, made with bigger and badder spaghetti.
  • Ruminations on Continuations and Ocean

    Continuations have been one of the features that I have been the most anxious to include in the 0.1 release of Ocean. Why? Quite simply because it is not done often enough. Most .NET/JVM implementations of Scheme offer half-hearted or no support for continuations. Scheme is, by nature, a very simple language. It seems almost paradoxical, then, that it has received as much attention as it has from the PL research community. The reason is simple. Its simplicity allows it to be used as a teaching tool (either by using a subset as a model language or by demoing code in Scheme), but it is its power that allows it to be useful to the researcher. The power comes in a few forms. One is the mandated support for the full numeric tower. Many languages include library support, but few include it built in. Another is the flexibility that hygienic macros allow. This allows the creation of new constructs in code that would normally have to be included in the compiler (switch statement, anyone?). Yet another is in the use of first-class continuations. This power is the difference between Scheme and VB. VB is simple, but weak; Scheme is simple and powerful. So, one of the goals of Ocean is to ensure that both the simplicity and the power are maintained. I don't want another Scheme-like scripting language. Nothing against them, but I want Scheme. Anyway, now that I have explained why continuations are not a feature that I want to let go of (at least, not without a fight), the question becomes how to implement them. There is no built-in support for continuations in ECMA 335. Mono has gone ahead and added them anyway. While this is terrific, I want Ocean to be fully compatible, to the maximum extent possible, with all three of the platform implementations of which I am aware: Mono (by far, this one takes priority as it is more platform independent), Microsoft's (the "canonical" one and, by far, the most important to the developer who wants to interact with commercial software), and GNU's (the least important and, despite being almost the same age as Mono, the least mature for "real work"). Because of this, I have been continually snooping around on the web and through academic papers. To date, I have found two possible methods and I am considering them both. The first is good old-fashioned CPS, where some sort of custom class (that implements ICloneable) is used to represent a continuation. Things could still get sticky with object references that are stored in classes that are bound in the continuation. The second one I found is the more interesting. The paper is available here: http://www.ccs.neu.edu/scheme/pubs/stackhack4.html There is a lot of heavy lifting to be done, but the essential idea is to transform the programs so that they "build" the continuations themselves as they execute, where each procedure must be modified to "co-operatively" (in the words of the authors) help construct the continuation. It is tough. CPS, in this case, would be slower, as any reference would have to be tracked down with two "reads" instead of just one. Both seem like they could, potentially, have issues with nested references. The latter, according to the paper itself, looks as though it would have a problem interfacing code compiled with this technique (Ocean, if that is the design route taken) with code compiled by a C# compiler. CPS would not have this problem, however, because a continuation would become first-class in more than one meaning of the word. How are things going to be done? I don't know. A third possibility that I am still considering is waiting until the next version (0.2) to implement continuations. I don't like it, but it would get a usable piece of software released. And that, my friends, is my non-conclusion. Hey, there was a reason that this article was dubbed a rumination.
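    To give a flavor of the first option, here is a toy continuation-passing-style example in C#. It is only an illustration of the general idea (the continuation here is a plain delegate rather than the cloneable class described above, and it dodges everything that actually makes the problem hard, such as references captured in object fields); the names are mine and have nothing to do with Ocean's internals:

        using System;

        // The "rest of the computation" reified as a value the callee can invoke.
        delegate void Cont(int result);

        static class CpsDemo
        {
            // Direct style: the pending multiplications live on the call stack.
            static int Fact(int n)
            {
                return n <= 1 ? 1 : n * Fact(n - 1);
            }

            // CPS: the pending work is an explicit chain of delegates instead,
            // which is what makes the continuation a first-class object.
            static void FactCps(int n, Cont k)
            {
                if (n <= 1)
                    k(1);
                else
                    FactCps(n - 1, r => k(n * r));
            }

            static void Main()
            {
                Console.WriteLine(Fact(5));            // 120
                FactCps(5, r => Console.WriteLine(r)); // 120 again, via the continuation
            }
        }

    Even this toy shows the cost: every step allocates a new closure and every result arrives through an extra layer of indirection, which is the flavor of overhead behind the "two reads instead of one" worry above.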
  • Out of Town

    Once again, I am sorry for the air silence, but I've been out of town with relatively little web access (did any of you out there know that there is no WiFi in a barn?). More coming.
  • KDE, aRts, and VMWare

    VMWare Server has the rather nice ability to let you connect "sound cards" to virtual machines and then play the sound on the host machine. In essence, it tunnels the output of the virtual sound card to the host machine. This is nice for me, as my main home uses of VMWare are Napster (the legal one; I never used it as a P2P) and game playing. The last several times I have set up a computer with KDE and aRts, I have had some miscellaneous issues getting sound to run. So, I thought I would document the resolution here. After setting up VMWare, I also install the ESD/aRts wrapper. This is necessary on Gentoo, but I never had to do it separately on Ubuntu. After starting VMWare through the ESD wrapper, you should be able to connect a digital sound device. Under KDE, however, this usually fails initially. This is the nasty little trick you have to remember (that I never do, since I only do it once for each install): go to the audio options in the KDE Control Center, check "Enable suspend if idle after", and fill in some number of seconds. This is important because, if it is not done, KDE will never release the sound device and VMWare will never be able to acquire it.
  • The Use, Unuse, and Abuse of Scripting

    Ah, good old scripting. The way to quickly automate those obnoxious, drawn-out tasks. That is what bash (or zsh, ksh, csh, etc. ad infinitum) is for, it is what Python is for, it is what Ruby is for, it is what Perl is for. At my current job, I get a lot of those little tasks that need to get done, but would be tedious (not to mention very, very long) to do by hand, and are superbly suited for scripting. Fifteen or twenty minutes spent writing the script and I can do other things while the input is getting crunched. I've had times where the script took about fifteen or twenty minutes to write, but I had enough input that it took hours to finish crunching. Yet I continue to be amazed at the number of people who will simply rush in and do the task at hand, spending many tedious hours doing little details without even stopping to wonder if there is a better way. I admit it: I would go insane doing the same thing. These are the unusers. Then there are the people on the other end: the people who want to write a 3D engine in Perl or a massive enterprise application in Python. These are the abusers. Any and every conceivable task must be done in scripting language XYZ. Heck, there are even wiki engines written in bash. While the line between applications development and scripting has been somewhat blurred, there are still things that it is just not a good idea to script. Anything where performance is key is a place where you do not want to script, as is anything that is going to be expected to scale way up. The proper use of scripting, in my ever so humble opinion, is the automating of tasks, usually through the gluing together of other components. Personally, I use bash for most of this type of work and, when bash can't do it, Python. For actual development, I usually use C#, Haskell, or Scheme, with a smattering of C/C++. I don't use Perl at all if I can avoid it. Not to upset anyone or anything, but Perl looks like line noise. It is just way too kludgy a way to do things for my taste. Where do the culprits come in? The unusers tend to be in the corposphere, where the vast majority of the people who get these little tasks don't know anything about programming and hence don't even know that there is a better way. The abusers tend to be the h4x0r types. No sense of the larger aesthetic, just quick and dirty get the job done. Sadly, there aren't too many users out there. Many IT staffs are shackled from doing this sort of thing and most of the admins couldn't do it anyway. The places where it tends to get done right, I think, are in individuals' homes where they have free creative lease, academia (I had a professor who more or less graded work on scripted unit testing), and *NIX shops.
  • Gentoo -- for the Ubuntu/Fedora/etc Linux user

    When I first used Gentoo, it was after I had been using Fedora Linux already. I saw a great deal of potential in Fedora, but I found it too bloated. I started stripping things out, but I figured then, as I believe now, that it is easier to start minimal and build up than to strip out what is unwanted. The other thing I knew was that RPMs were unacceptable; I was able to break the system way too easily. By nature, I am a tinkerer and I can crash anything once I fiddle at a low enough level, but I was able to break the packaging system far too easily even so. A couple guys at school told me about Gentoo. So, I started using it. As advertised on its website, it took a long time to build all of the necessary components. Not really knowing what I was doing, I did things the Wrong Way (TM). When the mess got big enough, I switched to Slackware. After doing a bit more reading (and managing to get lost in my own Slackware box), I went with Gentoo. The point here is that my time with Gentoo has been interspersed with Fedora, Slackware, and Ubuntu. In short, I know what it's like to jump into Gentoo and get in over your head. As with Linux itself, I could see the power and potential from the beginning, so I hung with it, and now I am here to share my wisdom with those who are used to the more friendly side of Linux. First, the thing you have to understand is that Gentoo is, in essence, a semi-automated Linux from scratch. Gentoo's idea of eye candy is having the prompt colored by default (which is cool). As time has gone on, the Gentoo team has tried to introduce an installer written in Python to take some of the pain out of this process. I must say, as much as I wanted to love the installer, last time I tried it I couldn't get it to do jack. It was quicker and easier to go the old-school way. With that in mind, the first thing to do is to sit down and (preferably on paper) write down what you want to be on the machine, because Gentoo's philosophy is, as much as possible, if you don't ask for it you ain't getting it. We'll come back to this point shortly. Secondly, USE flags. They are, with Portage, the most touted feature of Gentoo, and well they should be. They offer unprecedented flexibility and power. "With great power comes great responsibility", a tired old phrase in our Spiderman-infested world, holds true here. Good USE flags are the difference between a fast, stable system and a buggy, crashing one. Thirdly, READ THE MANUAL! Like probably every one of you, my first instinct is to think that the stupid manual is overrated and I will figure it out later. Well, you won't get away with it here. Read the manual every step of the way. It'll save your life. So, with that in mind, here are my tips for getting started (note: at least skim the handbook first; this will all make more sense then):
    - Go to the USE flag reference page and read through the list. Add the USE flag for any functionality you will want throughout the system. So, for example, if you are setting up a desktop system, you will almost certainly want the X USE flag throughout the system.
    - Unless your goal is to experiment with multiple kernels, pick your kernel and install that kernel and ONLY that kernel. Since I was doing all of this on a laptop, I wanted hibernate capabilities, so I needed the Suspend2 kernel sources. In one of my earlier installs, I went with a default without thinking about it and got myself into trouble later when I had multiple packages built against different kernel sources. If you are using genkernel this won't matter as much, but otherwise figure out what you will need in your kernel before configuration. What kinds of power management will you want? Do you want a splash screen available? Do you have wireless cards (even if you use ndiswrapper, you will need to compile some support into the kernel)? What hardware will you be using in general? Have answers to questions like these and you will be able to get things done much more cleanly the first time around.
    - Install Gentools. You'll love it.
    - Install eix. Searching for packages and displaying packages is so much faster.
    - Before emerging anything, check the USE flags. There will be packages for which you want functionality that is not necessary on the whole, or functionality you will want to strip out. Check the USE flags first and make adjustments as necessary to package.use.
    - Keep the system up to date. It makes that whole compile-from-source paradigm less painful. If you do it daily, you will usually have 1-3 packages, usually smaller libraries, that need upgrading. If you do it once every few weeks, you will tend to have a bunch of the smaller builds plus one or two of the bigger ones. Just schedule it and keep the system up to date.
    - Prowl the forums. I admit I don't do this as much as I would like, but you never know when something interesting will pop up.
    - If you have issues, Google, the Gentoo wiki, and the Gentoo forums are your best friends. With a little skilled searching, you will often not even need to post anything yourself.
    Well, that's about it. I'm sure I'll add more as I think of it.
  • Finished changing...for now.

    I finally threw my hands up in despair. When dealing with Sabayon, I got so fed up with trying to strip out what I didn't want (and there was a ton), and with the fact that updates kept failing for odd reasons, that I gave up and have spent the last few days rebuilding my old machine as good old-fashioned Gentoo Linux. Forthcoming on this site will be my quick tips for those who are Linux veterans but Gentoo n00bs, as well as the first published versions of my Lambda Overlay (see the projects page). Forgive the delays (hey, compiling a useful desktop takes time) and stay tuned!
  • The Ideal Distro

    After switching distros a couple of times and writing about it here, it seemed appropriate (especially given what I wrote at the end of the last post) to define what, exactly, I want in a distro. The answer is simple:
    1. Relatively simple to get up and running. Gentoo is awesome, FreeBSD is awesome, but they both suffer from the same problem: it takes too long to go from nothing to a fully ready and operational desktop. Bear in mind what I am saying here. It is not that it is too hard or done the wrong way. It just takes too long. Setting up the OS should not itself be a minor hobby.
    2. Powerful package management tools. Apt-get/aptitude is inflexible but stable, yum & company is inflexible and not stable, and Gentoo is flexible but a little more flaky. All in all, I prefer Gentoo's system, but the long and short is that I want powerful, flexible package management tools. I don't want a psychic "I'll do it for you" system. I just want good tools.
    3. A full system after install. I don't want to have to do a lot of low-level work on the system after installing it. I want to be able to add software and go.
    4. Sleek, but not emaciated. Following point 3 above, I want a system that is set up (desktop, splash screens, bootloader, etc.) but not bloated. Fedora is bloated. SuSE is bloated. Heck, Sabayon is bloated, and I never thought I'd see a Gentoo-based bloated distro.
    It's a relatively simple list. Most distros swing too far to one end or the other. Raw Gentoo is a semi-automated Linux from scratch. SuSE is everything and the kitchen sink. I want something that helps me get started, but puts me at the controls thereafter. Of the items on that list, number four is, I think, the only one left that could use a bit of explanation. What do I consider "sleek, but not emaciated?" Well, like I indicated above, I want a bootloader, splash screens, and a desktop, but what else? I think that the answer should be:
    - GNU autotools/automake--despite all of the calls to make Linux more user-friendly, the only tried and true install method is still ./configure, make, and make install
    - A good shell (bash)
    - A web browser
    - An office suite
    - A media player
    - A couple of simple games
    The problem with the bloated distros is that they usually provide several versions of each. Actually, this is something that I think Ubuntu does fairly well on the whole. It covers the bases, but it doesn't add seven web browsers, four bittorrent clients, three office suites, ad nauseam. That said, I can assure you that MCPLinux will not be entering production anytime soon. Why? Well, I would want to bring something new to the table. You can get what I advocated above with a properly tuned and configured distro. So why go to more work? There are quite enough distros in the world. Moreover, it is a problem that I just don't consider interesting. The whole point of rolling a distro is to do all of the dirty work for someone else. Truth be told, in this arena, I just don't want to do the dirty work. Hopefully someone else will do it. I have my own projects to complete (like Ocean).
  • Time for a Change...Again!

    Well, I mentioned in an earlier blog post that I was switching from Gentoo to Ubuntu. Well, I have had enough of Ubuntu. It is a nice, well integrated distro. The problem is that I don't want a nice, well integrated distro. I want one that is pleasant (i.e. has a medium amount of eye candy), minimal, and with a powerful package manager. I guess that's the core of my complaint against Debian and its derivatives: I find the whole apt-get system a little rigid. After poking around a few days ago, I came across Sabayon. Sabayon is a Gentoo-based distro, but it is the quick and easy way to get Gentoo running. Sounded like just my cup of tea, so I grabbed a bittorrent and awaited the arrival of the ISO. Just last night, I burnt it and installed it. So, how was it? The graphical install was painfully bad. It was very well laid out, but it crashed each time through. Finally, I gave up and went with the text installer. All was well. I selected a KDE desktop (farewell, GNOME, forever!) and let 'er rip. It took a while to complete, the reason for which will soon become clear. I rebooted the system. The splash screens and logins are particularly slick, but when I logged in, an OSD covered my screen, started at 100%, counted down to 0, and froze. It refused to go away. The rest of the system was fine. This is after a default install, mind you. I hadn't restored anything or started tinkering with software yet. A bit of trial/error and googling told me that I needed to kill KMilo, the special keypress service. After doing so, the problem went away. After getting past that hurdle, I soon realized why the install took so long: Sabayon is as bloated as Fedora. I installed one desktop, KDE, but I got all of the "accessories" (small games, utilities, and such) of two: both KDE and GNOME. Why on earth did it install two desktops' utilities, but one desktop? That is besides all of the other software it installed by default. I spent more time removing all the software I didn't want than adding what I did want. Beyond that, the system is nice. Good artwork, the Gentoo toolset (is there a better one?), and all. Sabayon seems to be mostly what I've wanted in a distro: quick install, low-level eye candy preconfigured, with a down 'n dirty toolset so that I can manage it all myself afterwards. My big complaints were the OSD problem above and just how bloated the system was by default. Coming soon: my fantasy distro and why you probably won't see MCPLinux (mad computer scientist linux) anytime soon.
  • Running Knights of Honor in VMWare

    As is quite evident from the topics and articles on this blog, I am not a fan of Windows. Give me just about any *NIX system and I'll be happy, but not Windows. Everybody needs to relax, right? Well, every now and again, I enjoy a good video game, preferably strategy or card/board. So, I picked up Knights of Honor (KoH; I also enjoy Rome: Total War, but that won't run under VMWare on my box due to the lack of hardware acceleration) and tried to install it on a VM running Windows XP Professional. So, what's the big deal, you might ask? Nothing much--except that KoH uses some DRM which doesn't always play nicely in its virtualization lockbox. Here, then, are the steps I took to get KoH installed and running:
    1. I ripped Disc 1 to an ISO (e.g. dd if=/dev/mycdrom of=myiso.iso). From the various boards and such that I have been reading, it appears that Disc 1 has no copy protection built in, whereas Disc 2 does. Yet, I tried running Disc 1 straight from the drive in VMWare and could not get around the "setup.exe is not a valid win32 application" errors. For some reason, of which I am not even sure, ripping the disc and running setup from the ISO works like a charm. Run the setup right up until the point where it asks for Disc 2.
    2. Put Disc 2 in the physical drive. "Eject" the ISO and replace it with a reference straight to the physical drive. Make sure that "Enable legacy emulation" is checked. Once again, that last sentence was learned through trial and error. I assume the issues have something to do with the SecuROM DRM, which relies on reading some sort of subchannel (no, I don't remember the specifics).
    3. Finish the install. If you have problems, ensure that your IDE adapter is set to use DMA whenever possible, rather than PIO, and continue.
    4. To run the game, make sure that the IDE adapter is set to use DMA whenever possible, that the virtual CD drive is pointing to your physical one, and that legacy emulation is UNCHECKED.
    Well, that's what I did. Doesn't sound too hard, does it? Well, if you knew how much time I spent trying random options and surfing DRM cracking boards to figure it out, you might just appreciate why I am posting this (which is, in part, for my own memory, so that I have a record of the steps taken). In the end, this is a perfect example of what bugs me about DRM. I don't mind that some kid in Bangladesh can't get a quick and easy torrent of the studio's work. What I mind is that I, a legal paying customer, have to resort to tricks and games to use the software that I PAID for. When I bought KoH, I was buying software from Sunflower, not their hardware/software platform preferences.
  • Users are Luddites

    That's right. Users are Luddites. They just don't want change, not even if it's an improvement. The act of change means adjustment and, as creatures of habit, the last thing we want is change. There are exceptions, of course, where said users demand change because of just how dire their need is. Usually, though, you will still hear complaints about certain things "not being like they were before." Few programmers will disagree with this take on things, having been, quite likely, subjected to precisely this sort of thing before, so I shall not bother with examples here. Lest anyone think that this is merely an l33t (or someone who regards himself as such) talking down to Joe Point 'n Click, this is not a tendency confined to the average user. Much to the contrary, there are many very avid geeks who feel the same way--they just feel that way on more advanced topics. Take, for example, Perl 6, currently in progress. There has been a ruckus in some quarters about changes being made to the language, particularly changes to the operators. The reason? They don't want a change. One Slashdotter went so far as to call Larry Wall "arrogant" for the said changes. That strikes me as patently absurd. It isn't arrogant for someone to make a change to the language he invented. It IS arrogant for someone else to try and tell that person not to change his own stuff. Moreover, it's OSS. Fork it if you want to. But I digress. Other examples include FreeDOS (originally created because someone didn't want to give up MS-DOS, but now used for other, more useful things--like games), and various APL revival attempts. Users are users and users are Luddites. Add to that that programmers/geeks/admins are users and we get geeks->users->Luddites. Interesting path, no doubt.
  • Professional

    I had a compsci professor who said that we, as computer scientists, are professional problem solvers. True, quite true, but that isn't the most concise way to define the profession. What we really are is professional junk-takers. That's right, folks. I know, I know: all professions get this to some degree. No matter what you do, you're going to have to put up with some of it--just more so in an IT-related field. The reason is simple: the vast majority of people who use a computer do not understand it. That would be fine, except that they do not think of it as what it is: a tool they do not understand. Most people don't know, really know, how a microwave works either, but they do understand that it is a tool. Nothing more, nothing less. Sadly, it is not so in the world of computers. The average user thinks of the computer not as a tool but as a magical, mystical artifact, and of those who harness its power not as craftsmen but as wizards. Yes, behind our monitors, surrounded by a sea of blinking lights, we ply our own flavor of black magic. We could muster the world, if we would, and lay it at their feet--but we don't. In their eyes, we simply hide in our dark lairs (whether we do or not) and come up with reasons why we can't do what they wish. We delay, we moan and we dodge. Why don't we just do what they want? Surely it can't be THAT hard? And that's the problem. It IS that hard, but since they do not understand the issue at hand, we just can't make them understand what goes into even the most trivial or most ubiquitous application they use. I wish I could say that education were the answer. I really do, but it seems to me that every generation gets more technically aware than its predecessor, and it doesn't really help, because every generation adds its own technical wonders to the world and layers marble over the wood that was already there. The more there is, the less they understand--and the more they assume that we are wizards uttering incantations over a cauldron.
  • The School of DIY

    I've been reading some papers on lambda lifting. At its core, lambda lifting is the process of elevating (i.e. lifting) anonymous and inner lambdas to the global scope, renaming and revising the program along the way. The end result is that the output is functionally equivalent to the input, but more or less scopeless. It's interesting stuff for someone who has never been inside the bowels of a compiler for a functional language before. It does, however, bring a couple of things to mind: first, how hard it can be to find out on your own what is often considered basic in the field you are studying, and second, how much more insight the search can give you into the topic at hand. The real reason it can be so hard is that when you have a question, you don't know the answer, and so you do not know in what form or from what direction it will come. Take the example above. I understand the concept of lambda lifting and now I am trying to extract a specific enough algorithm to code it in stage 1 of Ocean's compiler, but I didn't start out looking for lambda lifting material. I was reading Guy Steele's paper on Rabbit, specifically the section that reviews the compilation strategy. I figured that, although compiler technology for functional programming has come a long way since then, it would be an excellent starting place. In that section, Steele casually says that the input Scheme function is alpha renamed. Alpha renaming is simple enough to understand (it is the rule in the lambda calculus that states that changing variable names does not change the fundamental program), but I could find nothing that really aided me in figuring out how to actually implement it as an algorithm. I sat down to code it up myself and realized the daunting complexity. A great deal of wandering finally revealed that lambda lifting was the cure. No doubt, if I had been doing Ocean as a master's thesis I could have asked my advisor, or if I had taken a related class I might have already known the answer, but I wasn't and I hadn't. So the knowledge was harder to come by. Yet I am fully convinced that knowledge that was painful to gain is harder to lose. It has a distinct tendency to ingrain itself more fully into the psyche. Besides which, a bit of wandering (within the field of study, mind you) gave me a great deal more insight and exposure than I would otherwise have had. I love to learn for the sake of learning, but I have a harder time doing that if I cannot focus the energy on something in particular. Long and short, I do go to school--it is the Do It Yourself School, which has a very close working relationship with the School of Hard Knocks. I am certainly not advocating making learning harder than it should be, but I do think that the value of having to, in some fashion, reblaze the path that was cut is often undervalued.
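    To make the idea concrete, here is a toy sketch of alpha renaming in Haskell--purely my own illustration over a made-up three-constructor AST, not Ocean's code--in which every bound variable gets a fresh, numbered name and free variables are left alone:

        import qualified Data.Map as M

        data Expr = Var String
                  | Lam String Expr
                  | App Expr Expr
                  deriving Show

        -- Rename every bound variable to a fresh name, threading a counter.
        alphaRename :: Expr -> Expr
        alphaRename e = fst (go M.empty (0 :: Int) e)
          where
            go env n (Var x)   = (Var (M.findWithDefault x x env), n)
            go env n (Lam x b) =
              let x' = x ++ "_" ++ show n
                  (b', n') = go (M.insert x x' env) (n + 1) b
              in (Lam x' b', n')
            go env n (App f a) =
              let (f', n1) = go env n f
                  (a', n2) = go env n1 a
              in (App f' a', n2)

        main :: IO ()
        main = print (alphaRename (Lam "x" (App (Var "x") (Lam "x" (Var "x")))))

    Running it renames the two nested x's apart (x_0 and x_1), which is the whole point: after this pass no two binders share a name, so later passes like lambda lifting don't have to worry about capture.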
  • Packaging: Debian vs. Gentoo

    If you have read this blog, you know that I have been playing with Debian packaging. I have created packages on both Debian and Gentoo systems and so, here and now, I offer my final manifesto on how they compare. So, what is the difference? Debian packaging basically archives an installed version of the tree (etc/, usr/, usr/bin, etc.) along with a couple of text files that describe the package: what it is, what it depends on, and so forth. Gentoo doesn't really package anything. Ever. Gentoo's Portage system is a network of Python and bash scripting that tells the system where to get the parts for the package, how to configure it, how to build it, and how to install it. There is a deep philosophical divide here. For the package creator, the tasks are almost completely different. In both cases you have to build the package, but in the case of Debian that is almost all you are doing. You build the package, roll the install into one tarball and pass it around. In Gentoo, you are scripting the build for every single user thereafter. You don't give them a finished product, you give them a machine-readable build manual. From the package creator's point of view, it can be a pain in the neck either way. From the user's point of view it's a case of convenience versus flexibility. Debian packages are nice and easy to install. No fuss, no waiting. It's just done. Gentoo's Portage system offers endless configurability and flexibility to build your system the way you want it. Ultimately, it comes down to user preference. Do you want the flexibility or do you want it now? There is no right answer here. There are other concerns for the package creator. With Gentoo, you are more or less recording the build process. Sure, you sometimes have to do some tinkering to get it to play nice in the sandbox, but those times are relatively rare, and for good old-fashioned autotools software it is a piece of cake. Maybe it's just me, but Debian seems far more fussy. There has been some software where I just said "oh, heck, I'll just make install" rather than fiddle any more with the package. All in all, I like Gentoo's system better. It is simple, clean, elegant, and flexible. That's not to say it is without caveats, but my experience with it has been smoother than with Debian. But, hey, who knows? I may write here before too long about the glories of deb packages--but I doubt it.
  • Chicken Package--Updated

    In my continued travels on the subject of Debian packages, I came across a discussion of checkinstall. Basically, the high and low of it is that checkinstall is quick and easy--but the packages will not necessarily work all that well in a clean-room environment. This, combined with the fact that the paths in the previous package were not quite right, prompted me to build a new package "the right way". It is now available. Before I talk about the solution, I wanted to mention a couple of things that researching this problem brought into greater perspective for me. That out of the way, here is what I did. First, I downloaded and extracted the source, then cd'd into the base source directory and ran:

    $ dh_make -n -e my-email@my-domain.dom

    This command generates default build scripts for the Debian package (note: you will need to ensure that the package providing dh_make is installed). The next step is to edit the control file, which will, by default, be created under debian/control. The main changes needed are to the description and to the dependencies. The documentation on the specifics can be found here. As it is relatively straightforward, I will not go into detail on it. Once you have finished with that, you may need to edit the various rules in the rules file. If your software is fairly standard automake, then you probably don't need to do anything. If, on the other hand, you were using NAnt, Ant, HMake, or some other custom build system, you would have to modify the rules file to build properly. After all of this is squared away, simply run:

    $ fakeroot debian/rules binary

    If the previous steps were completed correctly, the build will spit a shiny new Debian package out into the parent directory. A good old-fashioned dpkg run and the package will install on your system.
  • Sorta-Seamless Virtualization

    I'd seen an article before about using VMWare to run Windows programs "seamlessly"--as though they were being run from the user's own desktop. I say sorta because you can try to make the themes and widgets match up, but it just won't always work. Anyway, today I stumbled across another article and so, with little else to do, took the plunge. I won't regurgitate the article. This is the link: https://help.ubuntu.com/community/SeamlessVirtualization The one comment I will make is to make absolutely sure that you are logged out before trying to launch your app of choice--if you don't, you will get the whole Windows desktop. If I won't regurgitate, then what am I writing about? Well, after the whole thing worked, I got to thinking: normally, if I am not using a VM in any way, shape or form, I shut it down. I don't have limitless resources on this Dell laptop and those CPU cycles are precious. Wouldn't it be nice if the VM could be booted automatically whenever we try to run the program and it is not already running? Why yes, it would, and we can make it happen with VMWare Server's command line tools. I wrote this in Python, as bash does not have a sleep function (which is needed so that, if the VM has to boot, the script waits until the boot process is finished before trying to connect). Here, in all its glory, with some machine-specific redactions, is the nifty little script I wrote:

    #!/usr/bin/python [privateIP]:3389 -u iuser -p ipass')
  • Haskell as UNIX

    I was continuing work on the tutorial I have mentioned before and one thought occurred to me: Haskell (and functional programming in general) promotes a UNIX-ish way of thinking. UNIX's mantra is "do one thing and do it well." As anyone who has done some functional programming knows, you are very quickly forced to break the task down into pieces. Then you write a function that does one and only one piece. Scheme encourages this style; Haskell all but mandates it. Under the UNIX style, we have a utility (awk/sed or perl, depending on who you ask) to process text files*. We have a utility that dumps a file to stdout (cat), a utility to find files (find), one to search text (grep), etc. Indeed, the principal point of shell scripting is to chain multiple utilities together. Somewhere between this similarity, language fanaticism, genius, and mania, someone seems to have come up with the idea of marrying the two, and the idea of the Haskell shell was born: h4sh. It looks kind of nifty on paper. I don't know that I would write my shell scripts with it, though. The more typical ba/sh seem to work well enough and, honestly, I am having a hard time thinking of anything that you could really do with h4sh that you can't do already. If anything, it's more stylistic. This isn't a criticism or a slam of the work. Interesting idea, and I am all for new ideas. Maybe I am just not hardcore enough yet. Is this pattern useful in any way, shape, or form? I am not entirely sure. I suppose one could write a shell where the difference between a user-defined function and a program is practically nil. I could create a function sort (yes, I know that there is a sort utility) and, on the mythical command line, do this:

    $ cat foo | grep "bar" | sort

    And get the correct result. In bash, you would have to pass the result of grep as a list to sort (barring some magic of which I am not aware). You could store the function definitions (or byte-code compiled versions) in the user's home directory and load them in on startup. Some method to handle potential conflicts between function and program names would have to be put into place. I don't even know that there would be anything useful about this, but it does have kind of a cool, geeky feel to it. Who knows? Perhaps someday I will try to figure out what the ideal shell would look like and create it. Just not today.

    * At least, that was the intention. GNU Awk now includes such functionality as network ports, and perl does everything under the sun including 3D engines, apparently, according to this gem I'd stumbled across in past travels: http://www.perl.com/pub/a/2004/12/01/3d_engine.html. What would possess someone to even try a 3D engine in Perl, I have no idea. Still, it all goes to show that feature creep is alive and well in the UNIX world.
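    Back to the function-as-utility idea for a moment: even without h4sh, you can get a feel for it in plain Haskell by treating each "utility" as a function from String to String and building the pipeline with ordinary composition. This is just a toy sketch of mine (the primed names are made up; this is not h4sh's API):

        import Data.List (isInfixOf, sort)

        -- Each "utility" works on the whole text, line by line.
        grep' :: String -> String -> String
        grep' pat = unlines . filter (pat `isInfixOf`) . lines

        sort' :: String -> String
        sort' = unlines . sort . lines

        -- Roughly: cat foo | grep "bar" | sort
        main :: IO ()
        main = readFile "foo" >>= putStr . sort' . grep' "bar"

    The pipe becomes function composition, and a user-defined function really is indistinguishable from a "program" in the pipeline.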
  • Chicken Package

    As I was heading out for lunch today, I was thinking about my good old project Latrunculi. I wrote earlier that I was going to put it on hold until I finish Ocean. As Ocean is coming along (at the moment I am rewriting a great deal of the macro expander; the code is coming out cleaner and more elegant than before, but not as quickly as I should like), this could be a while--and no wonder. While compilers are not magic, they are not done overnight, either. This leaves Latrunculi hanging. I don't like to leave projects hanging, even though I routinely do it. So I decided to do a quick code audit and see how quickly I could push a first release out the door. I went to do a quick rebuild of the source and then remembered: since switching (for how long, we'll see) to Ubuntu, I hadn't installed Chicken Scheme. It is included in the repositories but, as on Gentoo, the package was not up to date. The solution was obvious: do what any red-blooded OSS user would do and download the source. The compilation went down without a hitch and, as gobbledy-gook scrolled across my screen, I googled the creation of Debian packages. Why? Well, the whole point of a package manager is to manage your packages. As I quickly learned as a Slackware user (my fault, not Slackware's), if you do not do this properly, things can quickly become unmanageable, by man or machine. Towards the build's end, I came across an article on the use of checkinstall (http://www.debian-administration.org/articles/147), a semiautomated method of generating Debian packages. The basics of Debian packages are easy to understand--a couple of control files and tars containing the actual files for the install--but that doesn't mean that I felt like doing it by hand. Like all programmers, I am fundamentally lazy when it comes to computers. There are bigger, better things to do than haggle with control files. So I decided to give checkinstall a spin. The usage is trivially simple: after building, issue this command as root:

    # checkinstall -D installcmd

    where the -D flag instructs checkinstall to make a Debian package (instead of an RPM or tgz) and installcmd is the command to run the install (make install, in most cases). I went ahead and generated the Chicken 2.6 package attached to this blog post. By default, checkinstall automatically installs the package after building it. Sure enough, it worked. The one caveat I did hit was that when Chicken tried to load a module that I generated from SWIG, it failed, unable to find libpcre. I am not sure of the extent to which this is a problem, but here is the fix. As root, run:

    # ln -s /usr/lib/libpcre /usr/lib/libpcre.so.0

    That solved the problem on my box.
  • Your Own Private Database Server with VMWare

    As the title implies (I hope), this post is a quick guide on how to set up a database server in a virtual machine with the aid of VMWare Server. The nice thing about Server is that it runs as a service, so our virtual server can continue running even after we shut down the console. In this example, I will use FreeBSD, though you can use anything you want, from Linux distro du jour, to OpenBSD, to Solaris, or even to Windows. On top of this fine OS, you will need to run some database system. I can't make an example out of nothing, so I chose PostgreSQL. If you have never heard of it, it is, feature-wise, the most advanced OSS RDBMS. What is the motivation behind this little exercise? Well, using VMWare Server in this manner can help with testing, so that you can run your apps against a real server and get a little closer to real life. You can also use it to run web apps that you may want to use personally (like a personal bug tracker) but don't want the world to see. Finally, it is an opportunity to try something new if you haven't done it before. So, let's get to it, then! The first thing you will need to do, if you haven't done it already, is install VMWare Server. It is a free download, though you will need to register (again, free) for a serial number. So, if applicable, saunter over to http://www.vmware.com/ and let them be your guide. Done? Okay. Next, we will need to spawn off a new virtual machine. Log into VMWare and, on the home tab, select "Create New Virtual Machine". While your mileage may vary, I chose the following options, with defaults for the rest:

    OS: Other -> FreeBSD
    RAM: 160 MB
    Hard Drive: 8GB, not allocated, split into 2GB files (mandated by the fact that I put the VM on a FAT32 external hard drive)
    Network Card: NAT (this will change later)

    The VM we just created is, obviously, completely blank. So, go to http://www.freebsd.org/ and download the disc 1 ISO (we shall not need disc 2 for this tutorial). Once that is finished, go back to the VMWare Server Console. Select the FreeBSD machine, then click VM->Settings. Set the CD drive to point to the ISO you just downloaded. Click OK and power up the virtual machine. The machine should boot straight into the FreeBSD installer. From here, do a standard FreeBSD install. I will leave the explanation of that to the FreeBSD project's quite good documentation (or, perhaps at a later date, a different tutorial). As an FYI, due to the limited use to which this VM will be put, I elected not to install ports. While useful in general, it is hard drive space wasted here. If you kept it simple, the install should finish pretty quickly. When the prompt shows up asking if you wish to reboot, switch out of the machine (Ctrl+Alt) and switch the CD back to the physical drive so that the VM will not boot to the installer again. Then choose yes. The system will reboot and bring you to a simple login prompt. Log in as root with no password. Now the fun begins. The software that we will want on this box is simple: SSH and Postgres. Postgres was the whole point of doing this, and having SSH for "remote" administration is both cool and useful. While we eventually want to make this a private server, we will need to download the software off the web first, so keep the networking card set to NAT (or whatever you use to connect directly to the web).
    Once logged in, you can determine what device equates to your "card" by looking at /var/run/dmesg.boot. In the case of my VM (and therefore, probably yours as well), the card was lnc0, and to get an IP run:

    # dhclient /dev/lnc0

    In FreeBSD, there are two "software worlds": ports and packages. Ports is similar to Gentoo's Portage (in fact, it is the progenitor of it) in that it is an automated build system. Packages is more similar to a basic Debian or RPM system: it downloads compressed binaries and installs them on the system. For simplicity and speed, I will use packages here. The basic way to add packages is:

    # pkg_add package.tbz

    In order for this to work, package.tbz must exist in the current path. Fortunately, for some packages, there is the nifty little shortcut:

    # pkg_add -r package

    which will both download and install the specified package. Like I said, this doesn't work, out of the box, for everything. So, let's just use it to get what we need to do the rest. Wget is a cool utility that runs on *NIX systems and can be used to download files from the command line. So, grab it as above:

    # pkg_add -r wget

    This will download and install wget in /usr/local/bin. Add this to your path by adding the following to your .cshrc:

    set PATH=${PATH}":/usr/local/bin"

    Then run:

    source .cshrc

    Now we can use wget in all its glory. Go to FreeBSD's website and go to ports. Search for postgresql. A quick look at the latest version tells us that, in addition to postgresql-server, we will also need gettext, gmake, libiconv, and postgresql-client. Fortunately, libiconv and gettext were installed with wget. So, look at the URL for the package and, for each of the remaining packages, run:

    # wget URL

    When done, you should have TBZs for all of the packages you need. Install gmake, then postgresql-client, and finally postgresql-server. Almost home free--we just need to do some more configuration on both the client side and in VMWare to finish up. First we need to initialize Postgres's various settings. We do this with the following command:

    # /usr/local/etc/rc.d/postgresql initdb

    Then we start the server itself:

    # /usr/local/etc/rc.d/postgresql start

    We will want Postgres to start up whenever we start the machine, so, using good (or not so good) old vi, add the following line to /etc/rc.conf:

    postgresql_enable="YES"

    We will also need to create a database and db users. To create a DB nice and quick, log in as pgsql (a user installed by Postgres) and run:

    # createdb xyz

    and voila! you have a database you can log in to. Test everything to make sure it is working right; at this point you should have FreeBSD running PostgreSQL just fine. The next step is to put this on a private subnet and make sure that we can access it from our host system (note: to help test, installing the PostgreSQL client on the host is recommended). Now for the host-only part. Go to VM->Removable Devices->Ethernet Card 1->Edit and change the setting from NAT to Host Only. Then return to the guest machine and rerun dhclient. If successful, the VM will have acquired an IP on the private subnet. In the prompt, enter this command:

    # ifconfig

    Your card should be shown on the list with a fresh IP. Next, we need to make sure that PostgreSQL will actually listen on the port. First, edit the file /usr/local/pgsql/data/postgresql.conf.
    Set the following lines accordingly:

    listen_addresses = '*'
    port = 5432

    This tells PostgreSQL to listen on all addresses (it does not have to be so; it just makes life easier in this scenario) on port 5432 (which is the client's default). Next, edit the file /usr/local/pgsql/data/pg_hba.conf (HBA = Host Based Authentication). This file acts as Postgres's private firewall (kind of): the rules in it are evaluated to determine whether a given incoming connection will be permitted. I added this line to mine:

    host all all 0.0.0.0 0.0.0.0 password

    host refers to any SSL or non-SSL TCP connection; the second field is the databases to be allowed (which can be set to a delimited list); the third is the users that will be allowed (which, again, can be a list); then come the IP address and mask (setting these to 0.0.0.0 allows anything; think of a zero in any given portion of the address as a wildcard); and finally the type of authentication (password sets it to good old-fashioned password-based authentication; PostgreSQL has many options). Finally, go ahead and restart the PostgreSQL server with this command:

    # /usr/local/etc/rc.d/postgresql restart

    That's it! Fire up a client on the host's side and run it, specifying the IP address of the VM. A few parting notes. I intentionally set security quite lax. Why? First, this is a virtual machine, not a real one, and second, the only permitted connections will be from the private host machine. Be much more conscientious if this is for anything remotely related to production! Also, while this article goes into detail only on the installation of PostgreSQL, the same lessons and principles can be applied to just about anything else.

    References: FreeBSD Handbook, PostgreSQL Manual
  • It was Time for a Change

    Yesterday, for no good reason, I decided it was time for a distro change. This ought to give anyone an idea as to what kind of a guy I am. The system was in perfect working order, chugging along as it had for the past year plus, but I couldn't leave well enough alone. I decided it was time to do something new and do it for real. You can try anything out in a VM and get an idea of what you think of it (which I love), but you'll never be able to develop a full picture without trying it on your day-to-day work machine.

    What I like: The control you get in Gentoo is terrific, but it comes at a price. It is really nice to be able to get the computer up and running in an hour instead of a week. Despite the fact that Ubuntu recognized my wireless card (vanilla Dell Latitude D505 laptop), I still had to spend some time twiddling with ndiswrapper to get it working correctly.

    What I don't: Ubuntu's slogan is "Linux for human beings". They do a wonderful job trying to make Linux all warm and fuzzy for the masses, but they seem to have overlooked one thing: the overwhelming majority of the people using it will still be, initially, geeks and programmers. By default, neither bash nor vim comes with coloring turned on. With Vim, the easiest way is to turn it on globally (which is what I want anyway): simply uncomment the line "syntax on" in the file /etc/vim/vimrc. The fact that it IS so easy to turn these things on begs the question as to why they are NOT on by default. It isn't as though the devs have to go through a massive amount of effort to make this work, and it is a plus for many a geeky user.

    All in all, though, it's just another distro. It is Debian as Debian should have been: relatively easy to get installed and relatively up to date. One of the major points for me in giving Ubuntu a spin was to see what all of the fuss was about. Well, I see it and I don't. It's still Debian with a shiny face, but I guess that shiny face on Linux is what a lot of people have been waiting for.
  • 2D Mouse Picking with OpenGL & GLUT

    As I may (or may not) have mentioned, I have been piecing together my own Haskell tutorial that I hope to make available soon (what time is soon? With respect to deadlines, all times are soon). I decided to write the tour de force example program with HOpenGL, as I've used OpenGL in the past and, I admit, I was a little ticked when the Haskell School of Expression used the Haskell X11/Win32 overlay. The reason is that, because I play with sundry operating systems and programming languages, I appreciate lessons that are transferable. If the lesson is coding, I appreciate being able to pick up the bindings in another language and make use of them there, rather than using some language's jerry-rigged utility that I can't use anywhere else. The thing is, on the other project for which I've used OpenGL, I did it in pure 3D, but this example is a simple 2D game. That makes most of the rendering easier (or rather, less verbose); however, this combined with the conditions of the game means that I have to redo the mouse picking. I also noticed that there isn't a lot available on this rather arcane condition: 2D mouse picking in HOpenGL with GLUT. So, post-odyssey, here is the mad computer scientist's lab report. OpenGL, being a forward-thinking spec, was written with 3D specifically in mind. This is well and good--we like 3D--and, for the most part, it is easy to ignore depth when rendering and get something that is 2D-ish out of the system. GLUT offers a simple event handler for passive motion (i.e. motion when there is no mouse button pressed). In Haskell, the signature is of the form:

    type MotionCallback = Position -> IO ()

    Position has the form Position x y, where x and y are expressed in pixels. We can set the new "target" equal to these coordinates, with one problem: OpenGL does not measure its coordinate system in pixels (come to think of it, I haven't the foggiest notion what it IS based on; I just had to develop something of a feel for it with practice). So we need to convert from pixels to the OpenGL coordinate system. Here is the finished code that accomplishes this:

    viewp <- get viewport
    pm <- get (matrix (Just Projection)) :: IO (GLmatrix GLdouble)
    mvm <- get (matrix (Just (Modelview 0))) :: IO (GLmatrix GLdouble)
    coords@(Vertex3 x1 y1 z1) <- unProject (Vertex3 (fromIntegral x) (fromIntegral y) 0.0) pm mvm viewp

    Providing the Z value in Vertex3 as 0 is important for the 2D aspect of this. If we were doing 3D picking, we would either render the scene to the back-buffer with color coding and read the color of the pixel where the mouse was located (the method used in Latrunculi) or we would use unProject twice: once with Z = 0 and the other time with Z = the far plane, and use these values to create a pick ray. x1 and y1 are the OpenGL coordinates we will need in 2D--with one caveat. The OpenGL Y axis is inverted relative to X11/Win32 windowing systems. The OpenGL FAQ gives the way to handle this as taking WindowHeight - y1. That assumes the integer form of the OpenGL commands, which is not present in the current HOpenGL bindings. What I found works is to invert the y1 coordinate in the following manner: (x1, 0.0 - y1), which is, admittedly, the same as multiplying y1 by -1. So we literally invert the Y coordinate and presto! we have the coordinates we need. As usual, the culprits in figuring things like this out are the stupid little things: getting the type signatures correct in the first few lines and finding out (or rather, remembering) that the axis needs to be inverted.
  • Out of Commission

    I was out of commission last week, but the cause was one of the best in the world: my wife just had our firstborn, a son. I just got back to work and I hope to post more quite soon.
  • Working on it!

    No posts lately, as I have been working on a lot of stuff, but it just doesn't seem to finish all that quickly. For example, I have been trying to get the client working on my laptop for my company's VPN. It's almost there. It's stopped throwing errors about MPPE (it wanted to load a kernel module, but I built the support into the kernel), but still no dice. As a nice side benefit, I upgraded my kernel and deleted a lot of the spare code I had lying around in my /usr/src directory. Ocean's macros have made some progress. The past couple of days they were completely broken. Then, today, a breakthrough. Now they are only half-baked, needing the following problems fixed/tested: templates are assumed to be lists (this need not be the case; they could be a symbol, list, or template), the handling of literal identifiers, not allowing improper lists to match against proper lists, and hygiene (not tested). Finally, I have been working on a Haskell tutorial. It is coming along nicely on the whole, with one stumbling block: the pragmatic use of monads to handle state and imperative-style components. I understand the second half; in fact, that part is pretty easy to grasp, but the use of monads and monad transformers is trickier. I understand the concept of layering them. I just don't understand what they are. It's one of those "I 75% understand it" situations. I just need to fill in that last 25%. In some ways, Haskell is closer to being a functional language that is ready for steady applications programming than Scheme is. GHC's performance is pretty darn good at this point, so that whole can of worms is pretty much under control. Moreover, it has better library support at this point, with work being done on several widget sets (gtk2hs looks pretty good, but I haven't taken it for a spin yet), SDL, and OpenGL. Many Scheme implementations have bindings to OpenGL, but few have SDL or GTK support that is anything better than alpha quality (including my favorite, Chicken). The biggest obstacle is that, despite many tutorials on the web, monads and monadic effects are a little tough to find good, low-level material on. I am close to becoming a Haskell convert. I just need to figure those monads out.
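    For my own future reference, here is the sort of small example I have been chewing on--just a toy of my own (assuming the mtl package), not something from the tutorial: StateT stacks a piece of state on top of IO, get/put talk to the state layer, and lift reaches down to the IO layer underneath.

        import Control.Monad.State

        type Counter a = StateT Int IO a

        tick :: String -> Counter ()
        tick msg = do
          n <- get                                  -- read the State layer
          lift (putStrLn (show n ++ ": " ++ msg))   -- drop down to the IO layer
          put (n + 1)                               -- write the State layer

        main :: IO ()
        main = evalStateT (mapM_ tick ["foo", "bar", "baz"]) 0

    Nothing deep, but it shows the division of labor: the transformer decides what extra effect you get (state, here), and the monad underneath (IO) supplies everything else.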
  • Services vs. Products

    Microsoft, IT's biggest player, makes its money by selling stuff; it only dabbles in services. They want you to buy Vista, Office 2007, CRM, the Xbox, etc. On the side, they offer crummy, overpriced support. The interesting thing about this is that the industry is moving more and more towards services. Red Hat, IBM, and Novell have pretty much bet their futures on OSS software, of which they sell relatively little but around which they provide many services. Even Oracle, who have been selling Oracle for a long time, are getting into the services business with their recent "Unbreakable Linux" offering. Microsoft has resisted this, but even they have been forced to acknowledge the value of giving some software away (hence the "Express" editions of Visual Studio and SQL Server). This raises an interesting question: what would happen if Microsoft moved towards a services-oriented business model without moving towards open source? What if any schnook could, legally, get on the internet and download Windows XP Home and Office 2007? Under this hypothetical model, Microsoft would sell business-oriented extensions to Windows and Office, but give fully featured versions away for any use, then charge for support and custom editions/additions. What would happen? Would this be a good idea for them? It would partially defuse a lot of the advantage behind switching to Linux. Honestly, I think it would be hard, but it may become fact some day. Even smaller software houses often make more money off of software maintenance than off the software itself. As a plus, it is a lot less menacing than "leasing" software.
  • 5

    Today I more or less got stuck writing, or rather updating, documentation. It is a slow, grinding, maddening, thankless task, more concerned with the details of page layout (to bold or not to bold; that is the question. Whether 'tis nobler to suffer the look of undecorated text upon mine eyes, or...) than with any genuine content. So, the Mad Computer Scientist, in an effort to remain sane or, perhaps more accurately, to remain insane in the same way and at the same level as before (i.e. no Uzi mania today), has taken to the safe haven of compiler R&D in between bouts of homicidal fantasies about users (as it requires no level of intelligence to work on documentation; indeed, the ability to shut the brain down would make the task easier). I have been reading Guy Steele's paper on RABBIT, a Scheme compiler written in MacLISP. There is a gap between functional and semi-functional languages (ala Scheme, ML, Haskell, and LISP) and procedural ones (like C, C++, C#, Java, VB, etc.). While most people who used procedural languages first have a difficult time wrapping their brains around functional programming (hereafter to be often abbreviated to FP) the first time around, it actually offers a great deal more flexibility than procedural languages. The world, however, is fraught with procedural languages and, at their core, the computers we use every day are procedural. Ultimately, in sheer pragmatic terms, whether directly or through an intermediary, FP must be translated into the more rigid procedural model. As I get closer to having a design worth implementing (which I should get to soon; macros are almost ready), I'll post the implementation plan here.
  • Macros

    I was examining my TODO list for Ocean (which is of decent length and, my being all for technology and all, is kept entirely on paper) and decided I had to pick the next task to really go at. Numeric tower support is almost done and is largely a matter of detail which, while important, needs to be broken up for the sake of keeping me awake and sane (so help me if I have to write another override operator function...). In the end, I elected to start work on implementing macros. It was an arbitrary choice, but one that makes sense in light of the research I'd been doing on Scheme compilation. As I read and reread the pages from the current (R5.92RS) draft, a few thoughts occurred to me. The most obvious was that I was here preparing to implement a feature of Scheme that I had not used to date. I have used macros sparingly in C/C++, but never in Scheme. I just never really saw a need for them in the things I was doing with it. The second thought was that Scheme's macros seem to hold almost as much worth for the language implementor as for the user, if not more. It seems like an interesting thought, doesn't it? Macros are expanded prior to interpretation/compilation (as the case may be), so they can be used in the process of interpretation/compilation--and that is what many systems (including Ocean, when it gets that far) do: define a core set of forms and everything else as default macros based on those forms. It's an interesting exercise to try and think of ways to rewrite common Scheme constructs as macros. A simple macro for the let form could be:

    (let ((x 1) (y 2)) body ...) -> ((lambda (x y) body ...) 1 2)

    Interesting, huh?
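    Since I'm thinking about macros from the implementor's side, here is a toy sketch (in Haskell, and purely my own illustration--nothing to do with Ocean's actual code) of the same idea expressed as a compiler pass: a little AST with a Let node and an expander that rewrites every Let into an immediately applied lambda.

        data Expr = Var String
                  | Num Integer
                  | Lam [String] Expr
                  | App Expr [Expr]
                  | Let [(String, Expr)] Expr
                  deriving Show

        -- Rewrite (let ((x e1) (y e2)) body) into ((lambda (x y) body) e1 e2),
        -- recursing so that nested lets are expanded too.
        expandLet :: Expr -> Expr
        expandLet (Let binds body) =
          App (Lam (map fst binds) (expandLet body)) (map (expandLet . snd) binds)
        expandLet (Lam xs b) = Lam xs (expandLet b)
        expandLet (App f as) = App (expandLet f) (map expandLet as)
        expandLet e          = e

        main :: IO ()
        main = print (expandLet (Let [("x", Num 1), ("y", Num 2)]
                                     (App (Var "+") [Var "x", Var "y"])))

    Once the core forms are fixed, "let" (and friends) never has to exist inside the compiler proper at all--it can live entirely in a table of default macros.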
  • (display "Hello, world!")

    To use the Scheme code for a clichéd intro to programming. If you are bored enough to want to know who I am, go to the "about" page (which should be up shortly if it isn't already). As this is the inaugural post to an inauspicious blog by an inauspicious person, this first post will be about this blog's manifesto.

    1. Computer science and techniques are interesting in and of themselves. You can find pragmatic snippets of VB code all over the internet, but that is not really the point here. Here, I intend to post questions and thoughts regarding the more theoretical aspects of compsci, as well as thoughts and results from various experiments. I am an experimenter by nature, and I will try things for no better reason than to try them. In short, I believe firmly in learning through discovery (another name I thought of for this blog was "For the Heck of It"). In addition, my various coding projects also act as laboratories for ideas and curios, and I will post with regard to them. Code snippets are far more likely to be in bash, Scheme or Haskell than anything else.

    2. Theory is good, but in order for it to be truly useful it must be applied. Virtualization theories are little good if they cannot be realized in superior virtualization software. Programming languages are useless unless they can be useful. A toy language is either a hobby or the spawn of a grant/academic paper. Toys may grow, but if they stay toys they will be forgotten.

    3. Zealotry is a negative for the advancement of technology. There are Linux fanatics, Mac fanatics, Windows fanatics, Haskell fanatics, Scheme fanatics, functional programming fanatics, ad infinitum. There will always be different ways of getting tasks done, but rather than getting overly hung up in one camp, it is far more useful to experiment and then draw conclusions from those experiments.

    As said in point 1, I will post with regard to my various pet projects from time to time (in fact, it will probably be pretty regular). My main projects at the moment are the following:

    1. Ocean - Scheme for .NET (http://oceanscm.sourceforge.net/). I have not put up a SourceForge web site for this yet, but the essential idea is this: I like Scheme. I like it a lot. It's neat (as long as you don't start using those stupid []'s) and very elegant. Then I tried to create Latrunculi (see item 2) and I hit a snag: as nice as Scheme is, the libraries are bare to the extreme. If Python is the "batteries included" language, then Scheme is the "make-your-own-batteries" language. About this time the first draft of the R6RS came out. It standardized a lot of the idiosyncrasies between Scheme implementations, and one of my favorite features to be added was standardized byte vectors. So, I decided to write myself a Scheme--and do it natively on the .NET framework. This would kill two birds with one stone: I would get a jump on the latest Scheme, and it would give me instantaneous access to a wealth of libraries which is still growing (and probably will be for the foreseeable future).

    2. Latrunculi (http://latrunculi.sourceforge.net/) - a nice OpenGL version of an ancient Roman board game, something like Chess or Checkers, but more similar to the game of Hnefatafl. Started one boring day at work to get a chance to implement the MiniMax algorithm (though I later switched to the more elegant NegaMax) and grew from there.