September 30, 2004

Picking a Winner

One of the things that has long fascinated me is the way in which given kinds of technologies (and fads in general) spread through a population - an observation I often use to gauge which technologies have real staying power. This holds more than academic interest for me - as a writer of books, I am often trying to find those ideas, those memes, that will likely be popular enough that I can sell many copies of a book by the time it gets to market. Even if you are a writer of fiction, being able to spot these trends gives you a chance to get a book to market just as its topic is getting "hot", and also helps you determine when a technology is "oversold" and consequently in danger of sitting on the shelves.

To that end, I thought I'd dedicate my column today to a loose set of "rules" (observations, really) that I use in looking into my crystal ball. Admittedly, what works for me may not work for you, but I think most of these are pretty much common sense.

  1. Open Standards Trump Closed Ones. Standards provide more than "playground principles" for getting people to play in a consistent manner - they also provide a way for smaller players both to avoid a lot of the conceptual research necessary to best describe a given technology and to ensure that their product will have a market for what it produces. As standards typically form the basis for the way a given set of intellectual property evolves, open standards can also head off potential legal battles down the road.

  2. Dead Tech Reincarnates in Software. Remember the drubbings that push channels, network computers, HyperCard, animated agents, and so forth received as they were pummelled into oblivion? Push lives on in spirit as Atom (nee RSS), network computers have morphed into cell phones and Blackberries, and HyperCard was one of the big driving forces behind the World Wide Web. If I were a betting man (and I am), I would wager that animated agents will come back into play very shortly as well, via XML-driven intelligent interfaces.

    One of the things that causes a technology to fail is that it gets used first as a vehicle for promoting advertisements. Often the ideas themselves were pretty good (channels, for instance), but because channels very quickly became associated with companies piping their ads to your unwilling desktop, push marketing became a term of loathing for most people. However, when people went back and looked at what made the technology possible (in this case the syndication offered by RSS), they often found novel uses for it, such as weblogs. I keep an eye on any number of failed ideas and try to figure out whether the failure was due to the concept or to its execution, and if it looks like the latter, I'll put out a watch on the web and elsewhere to see if the core tech is showing back up.

  3. Watch the Novice Developers. In the early 1990s, Microsoft came up with an exceptionally innovative product called Visual Basic. It combined an easy-to-use computer language with the ability to create native applications quickly for its core Windows operating system. Up until that time, Windows was not quite a slam dunk - it was fragile, it was competing with other graphical interfaces (such as the Macintosh), and it had a relatively small developer base made up of mid-level programmers working in C++. With VB, however, a whole new cohort of power users joined the ranks of "programmers" and there was a huge surge in the development community. Microsoft benefited from this because it meant that they had, seemingly overnight, a whole forest of applications being developed for Windows, which of course strengthened the attraction of using that particular OS.

    However, starting in about 1997, several things changed. Microsoft shifted their focus away from the development community and toward the higher-level IT managers, most of whom did not interact with the systems day to day. In the short term, this strategy paid off, as it sold more high-end server boxes. In the long term, though, Microsoft may have eaten its seed corn. Linux and the BSDs began to jump into prominence, and Java (and later XML) made it easier for developers to migrate off of the Microsoft ship. Suddenly, people were porting away from Microsoft in record numbers, driven mostly by the next cadre of young developers who had cut their teeth on these new quasi-Unix systems.

    Young developers create software, and the most innovative software usually does not start in the boardroom but in the garage office. To see what things will be hot tomorrow, look at what the most common apps in beta are at SourceForge or Freshmeat. Chances are that the applications being developed there will not themselves be the ones that are hot, but applications like them probably will be.

  4. Ignore the Natterings of the Press. I've worked for a number of tech magazines. They live and die off advertising. The mainstream tech reporters know that market well, and expend huge amounts of ink on the doings of Microsoft, IBM, Sun, Hewlett-Packard, Oracle, and so forth. These companies strip whole forests bare to get press releases out to the magazines, and the magazines oblige by writing good reviews about products in hopes of getting advertising money. Some are more objective about the news than others, but given tight deadlines and constrained budgets, the jump from press release to article is usually not far. And since press releases are essentially advertisements indicating that a given product has reached (or is nearing) completion, someone there was working on the idea at least a year before you read about it.

    By the time it hits eWeek, information about trends is stale -- more or less. If you are an investor looking at getting into a given technology in the market, the magazines can give you a pretty good idea about what is beginning to bubble to the surface, but if you are a developer looking to catch the next big wave, or a writer looking to get his book published in time for the market to take off, then you're probably late for the party once it makes the headlines.

  5. Solutions in Search of Problems. One of the central questions I always have about a given technology is whether it fulfills an obvious need or whether it is a solution in search of a problem. "I need to make lots of money by selling this software" is not a "problem," though I am sometimes astonished at how many otherwise sane people believe otherwise. Visual Basic made it possible for people who otherwise didn't know the first thing about programming to write serviceable programs. In more contemporary terms, SVG is beginning to take off not because it has any immediate demonstrable impact in presentation graphics (a la Flash or PowerPoint) but because it makes intelligent graphics such as self-aware maps possible, and as people become more conversant with it, they are adapting it for other things.

    When evaluating a technology, the first thing you must ask is "How will this help me?" If, after some time spent looking at it, you cannot formulate several different one-paragraph answers, then the technology is not worth investing your time in. I will typically role-play different potential users when looking at something new, and I figure that if I can't see any obvious advantages in that role-play session, then most other people won't either.

  6. Coolness Is a Factor. There have been a few technologies that I've looked at and immediately responded, "That's cool!" This isn't necessarily due to eye candy - if a technology makes it possible for you to conceive of things that you absolutely want to do with it right now, even if it isn't quite ready for prime time yet, then you've stumbled on something that will likely be huge down the road. Additionally, this does not necessarily have to be within the realm of computer technology; I read science and technology magazines extensively, looking not necessarily for what the latest gadgets or ideas are, but instead for what a given innovation (say, Bayesian computing) could open up down the road.

  7. Know Your Own Limitations. I've worked with web technologies for eleven years, and XML technologies for eight. I know these areas very well, especially in the context of user interfaces. I know less about databases, though I have worked with them sporadically over the last several years, and I specifically disavow domain knowledge there. This is not to say that you should not try to learn as much about technologies that touch on your own area of expertise as possible (that's the way you learn more, after all) but you should put a confidence factor into any assessment you make based upon how far it is from your core competencies.

  8. Watch the Edges. The most significant innovations do not occur well within the established body of a given area of knowledge. They occur at the boundaries between two otherwise disparate fields. Most new technologies come from people realizing that a well-established solution or methodology in one field can be adapted for use in a different one. Often that technology is a "bridge", a means of translating core concepts back and forth between the two domains -- such as the analog-to-digital-to-analog subsystems used in modern car engines, which had their foundation in sampling music. To that end, seeing the metaphorical similarities between two systems can often let you apply expensively funded innovation from one area much less expensively in another.

  9. Think Systemically. Progress is not linear. It branches and weaves, ebbs and flows, sometimes appearing to stagnate and then getting a sudden burst of energy that seems to come out of nowhere. When one particular technology (or company) dominates, it usually ends up creating back-pressure specific to that technology. In the absence of other factors, these forces usually balance out, but eventually some change in the technological environment will cause the balance to shift with dizzying speed. As a realpolitik example of this, several countries of the former Warsaw Pact have undergone significant improvements over the years, making them competitive in a manner well out of proportion to their relative population size. They've leaped ahead of much larger economies in areas such as telecommunications because they found it more cost-effective to scrap decades-old infrastructure and start from scratch, gaining the advantage of new technologies without having to spend the time and money needed to pass through the intervening stages. Look for factors like this when deciding where a break-out company may come from.

  10. Take Time to Think. In our go-go world, there is a tendency to feel that if you are not at your desk ten hours a day, six days a week, then you are failing (and may even be considered a thief for taking money during those unoccupied moments). Yet I find that by setting aside an hour or so a day for "research" - finding out about new technologies, working through patterns, doing something besides sitting and coding - you will generally create better code with more applicability, will better foresee the requirements you will face, and may even see ways around particularly thorny problems. Most of my best insights came to me while I was "taking a break" in this manner.

    I've long suspected that many of the problems that plague us today stem from a lack of reflection on the part of others in the past. Sometimes the best thing you can do is just turn off the TV, shut down the radio, and go take a walk.

I'd be interested in hearing about things that you do to help you "see the future". Until next time, enjoy!

-- Kurt Cagle

September 27, 2004

How "Widgety" should SVG Get?

Widgets have an interesting pedigree. Originally used as a humorous term for a small machine of some sort in a manufacturing process, widgets have made their way into the programming lexicon as a shorthand term for components used in software application development. These components are generally lightweight and are very seldom stand-alone, serving instead as snap-together Legos to make application development go faster.

HTML has long had a stable of seven or so such widgets for building web forms - components that are fairly primitive and limited in their capability, but that gather enough information to be useful while remaining easily implementable on just about any platform and basic enough to minimize cross-browser differences.

XUL, covered in my previous post, is a larger (albeit still finite) collection of widgets that provides enough flexibility to build what we consider traditional desktop applications. What differentiates XUL from HTML, however, is the presence of another layer, the XML Binding Language (XBL), which provides a set of interfaces for defining new widgets using a largely XML-based framework. XBL can both extend the functionality of existing XUL widgets (and HTML widgets, for that matter) and define new ones from scratch.

This extensibility mechanism is something that developers should think hard about. Limited component architectures can only carry you so far. Eventually, you come across a need for a component that can't readily be built within the existing widget framework, or that needs to be skinned in a different way than the current set can handle. XBL provides a way of building not only structure but also data structures, attached (or removed) event handlers, and metadata into new elements introduced into any given namespace. In simple terms, this means that you can keep your core presentation language (be it XHTML, XUL, or SVG) as focused on its task as necessary, while adding in a reusable set of intermediate components that push into the application layer.
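To give a flavor of what that looks like, here is a minimal sketch in the Mozilla dialect of XBL; the file name, the binding name, and the flash() method are all hypothetical, and a real binding would likely inherit attributes and styling as well:

<!-- fancybutton.xml: a hypothetical binding -->
<bindings xmlns="http://www.mozilla.org/xbl"
          xmlns:xul="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <binding id="fancy-button">
    <!-- Anonymous content: what actually gets rendered for the bound element -->
    <content>
      <xul:hbox align="center">
        <xul:image src="icon.png"/>
        <xul:label value="Fancy"/>
      </xul:hbox>
    </content>
    <!-- Behavior: a method exposed on the bound element -->
    <implementation>
      <method name="flash">
        <body>
          this.setAttribute("flashed", "true");
        </body>
      </method>
    </implementation>
    <!-- Event handlers attached to the bound element -->
    <handlers>
      <handler event="click" action="this.flash();"/>
    </handlers>
  </binding>
</bindings>

The binding would then be attached to an element through CSS, along the lines of -moz-binding: url('fancybutton.xml#fancy-button'), so the presentation markup never needs to know how the widget is assembled internally.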

SVG is already an extraordinarily large and complex namespace, one that is proving difficult for many vendors to implement in its entirety. Adding a requirement for additional core widgets on top of this pits two forces against one another that should be cooperating - the vendor community trying to put together decent, workable implementations and the customers/developers who want to see more functionality at the core level.

When talking about XML technologies, I've often differentiated between foundational schemas - ones that effectively define the toolkits that people use to make the web - and application schemas (the languages that use these primitives to build something useful). SVG is a foundational schema, perhaps even more so than XHTML is. In theory, you could replace XHTML with SVG (though it would entail a huge amount of SVG to do it); you couldn't replace SVG with XHTML, however. XUL isn't foundational, though it isn't quite at the application level either - it falls somewhere in between. Again, it is possible to recreate XUL with SVG (not necessarily practical, but possible), while the opposite isn't true.

This, consequently, is where the crux of the debate about SVG widgets lies. It is possible to create an editable text box in SVG (1.2) even without an editable attribute in the spec itself. The attribute exists because, while it is possible to build this kind of editor with SVG, the overhead of doing so in script is prohibitive (I speak from personal experience here). Beyond this one particular component, however, it is safe to say that most widgets could readily be created by some combination of SVG elements. This is the prevalent view that is spurring the recent developments in sXBL. Rather than establishing an additional set of widgets (the approach taken by the now-defunct Corel SVG effort in creating dSVG), sXBL takes the position that components can be built using an XML language that is far more flexible than any limited widget set.

XBL is far from perfect. I am dismayed by the overreliance of the original Mozilla implementation of XBL on RDF, which, while suitable for some applications, adds a layer of complexity to building apps that many developers are uncomfortable taking on. I find RDF confusing, and I've worked with it on and off for the last four years. I'd also prefer to see XPath rather than CSS used as the selector language, for reasons I've outlined before. However, all of this pales in comparison to the obvious benefits that an XBL of any sort provides.

One of the other facets that I think will become a much bigger factor within two to three years is the idea of incorporating SVG as an equal partner within the DOM. Adobe's SVG Viewer is a powerful application, but so long as it is limited to being "in the box", its utility will always be more for stand-alone applications than for components, for memory reasons if nothing else. ASV used as a behavior for defining SVG code directly within a web page has much more potential for making SVG a foundation for widget development, especially once you have an XBL-like mechanism to create SVG as shadow entities of other XML constructs. I've played with this some in the Mozilla SVG builds, and there's more than a little power in being able to reference an SVG group in exactly the same way that you do an HTML element.
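As a rough sketch of what I mean (the ids, sizes, and the two-line script are purely illustrative), an SVG group sitting inline in an XHTML page in the Mozilla SVG builds can be addressed through the ordinary document DOM:

<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p>This paragraph and the graphic below live in the same DOM.</p>
    <svg xmlns="http://www.w3.org/2000/svg" width="220" height="120">
      <g id="barGroup">
        <rect x="10" y="20" width="40" height="80" fill="steelblue"/>
        <rect x="70" y="50" width="40" height="50" fill="steelblue"/>
      </g>
    </svg>
    <script type="text/javascript">
      // The SVG group is reachable exactly like any HTML element
      var bars = document.getElementById("barGroup");
      bars.setAttribute("opacity", "0.5");
    </script>
  </body>
</html>

No plug-in boundary, no proprietary bridge API - the graphic is simply another set of nodes in the page.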

Right now we tend to impose what we know of established functionality onto our ideas of what can be done with such technologies, but realistically, once you can break out of the box (one of the biggest advantages that SVG promises, when you get down to it), there will be a whole generation of web designers who will see existing web applications as staid and dull. We need to get to that point first, but it's coming.

-- Kurt Cagle

September 25, 2004


XUL dialog sample from "Who ya gonna call? XUL!"

September 24, 2004

Who ya gonna call? XUL!!

I seem to take a perverse delight in working with languages and technologies that people have written off as "dead". The XML User-interface Language (XUL) is one of those. In the long, slow decline of Mozilla from dominant browser to technological backwater, an intriguing idea seemed to die with it. The idea was simple:

Create an XML based language that would describe a set of more complex interface components than those that shipped with the core HTML set. Use this as the foundation of an application framework, something that you could use to create such things as, well, web browsers. Call it the XML User-interface Language (or XUL, because it sounds very much like a demon out of Ghostbusters), and make it freely available.

This was a good concept, but Mozilla had long since lost its edge, Internet Explorer had become the dominant platform for web applications, and anyway, no one was really doing anything with XML on the browser anymore. And then something curious happened. The Mozilla team didn't give up. They kept pushing forward on new technologies, basing them largely on the W3C standards, though occasionally borrowing an idea from Internet Explorer that seemed pretty decent. They also worked to keep up with all of the major platforms (one of the beauties of code abstraction with XML), so that their browsers began to get better, incrementally at first, then faster in the last few years. XUL grew as well, as new components were integrated into the library, and the pieces began to play better with one another.

Last month, Firefox (a light-weight version of Mozilla 1.7) was announced as a release candidate, a near-final version of the application that would let people play with it in depth. As a web browser, it was pretty cool -- fast (though not as fast as it probably will be), easily extensible and skinnable, with a nod to the dominance of Google in the presence of a Google bar and a GMail extension, to RSS feeds in integrated web feed support, and to XML in extensive XML and RDF functionality built in (not to mention much more complete CSS support). This combination by itself was enough to make Firefox an intriguing prospect for me when I was looking at building a content-management-system client. I began to port over my code from Internet Explorer, surprised at the relatively minimal pain in doing so, but what I was developing was very much a traditional web page application. However, as I was reviewing the documentation, I kept coming across references to the foundation set used by Firefox ... XUL.

Okay, I have an admission to make. I'm not very good with C++ programming, at least as far as building windowed applications goes. Oh, I understand the whole concept of pointers and references, templates make a certain amount of sense to me, and I can generally follow C++ code without a lot of effort, but I found the whole reference-counting, interface-querying thing to be entirely too low-level for what I wanted to do - I may as well have been programming in assembler, for the amount of work involved. Perhaps that's why I gravitated to XML in the first place: I liked the notion of being able to abstract the pieces of an application, getting away from the routine and fairly ugly low-level programming that, to me, seemed like something a decent compiler should readily handle. Visual Basic was a big first step in this direction, but it took entirely too long for Microsoft to acknowledge that VB was not a toy (and hence make it easier to access low-level operations when need be). Ironically, in revamping the underlying interface description language (IDL) into the CLI (Common Language Infrastructure), Microsoft walked away from the simplicity of VB - the thing that made it appealing to beginning (and advanced but harried) programmers in the first place - in order to create VB.NET, which has all of the painful quirks of VB with little of the underlying simplicity.

XUL reminded me a lot of Visual Basic, but in a 2000s, XML-ish sort of way. You can create web components with XUL (as I'll show shortly), but XUL ultimately is about application development - creating applications like web browsers, e-mail readers and editors, or even more staid applications such as sales force tools, systems monitors, accounting software, and so forth. It has a lot of the features that have become standard in application frameworks, such as layout tools with flexors that resize automatically as the page does, multi-column grid and list elements, toolbars and buttons, fully supported multi-layer menus and menu popups, and one of the snazziest HTML editors I've ever seen, in or out of an application. The beauty of all of this is that the capability sits within one of the best web browsers on the planet, meaning that once you complete your application, you can make it available as an extension to Firefox itself.

To illustrate the power of this, the following screen shot shows a XUL application I wrote for editing web content:


Screen shot of an application built using the XUL toolkit.

I didn't write the CNN site, by the way, though I did select the HTML content and drag it into the HTML editor en masse, something I thought was incredibly cool. It took me about three days to write the app (though I'll admit that I'd done a lot of the groundwork before), and this while I was still learning the basic XUL API. At the moment I'm working on improving it, tying it into the CMS back end that I worked out, but the important thing to consider is that this is a "web application" - no C++, no VB, just a simple XML document and a block of JavaScript code.

Actually, more properly, this is an extension. It is downloaded like any of the dozens, likely soon to be hundreds if not thousands, of other extensions from the Mozilla site (though because of the nature of this application I can't release this version publicly ... I am working on an open source version, however). It becomes a part of Firefox, something called up from the Tools menu. Because it is part of the browser, it also lives within the secure local space, meaning that you can use this toolkit to create sophisticated, multi-platform applications that can be easily downloaded and updated.

This is not ideal for a generalized Internet scenario. It is, however, wonderful for the development of intranet (corporate-wide) applications. These are the applications that frequently cause some of the biggest headaches for developers, who have to balance the world of stand-alone applications against the limitations inherent in HTML web-portal applications. If, instead, you can straddle both worlds with an application framework that practically lives on the web, the potential is pretty much endless.

So what does this alien language look like? Here's a sample:

<window
    id="findfile-window"
    title="Find Files"
    orient="horizontal"
    xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">

  <vbox flex="1">

    <description>
      Enter your search criteria below and select the Find button to begin
      the search.
    </description>

    <spacer style="height: 10px"/>

    <groupbox orient="horizontal">
      <caption label="Search Criteria"/>
      <menulist id="searchtype">
        <menupopup>
          <menuitem label="Name"/>
          <menuitem label="Size"/>
          <menuitem label="Date Modified"/>
        </menupopup>
      </menulist>
      <spacer style="width: 10px;"/>
      <menulist id="searchmode">
        <menupopup>
          <menuitem label="Is"/>
          <menuitem label="Is Not"/>
        </menupopup>
      </menulist>
      <spacer style="width: 10px;"/>
      <textbox id="find-text" flex="1" style="min-width: 15em;"/>
    </groupbox>

    <spacer style="height: 10px"/>

    <hbox>
      <spacer flex="1"/>
      <button id="find-button" label="Find" default="true"/>
      <button id="cancel-button" label="Cancel"/>
    </hbox>

  </vbox>

</window>

This is not the code for the editor, but rather for a dialog box. The root element for XUL is the <window> element, which serves the same purpose as <html> does for HTML. The other elements, including group boxes, menus, buttons and so forth, define specific widgets. For instance, the <button> element defines a button, including its label, identifier, and default state.

The flex attribute is quite useful. It defines the degree to which an element will attempt to fill up the available space. A flex of 0 indicates that the element will take up the minimal space defined by its default size, or by its CSS-indicated size if available. On the other hand, a flex of 1 indicates that the element will attempt to expand until other elements push back on it (for instance, a multi-line textbox with a flex of 1 will attempt to fill the entire window, stopped only by other elements). Two elements with the same flex will divide the available space evenly, while two elements with flex values of 1 and 2 will take up 1/3 and 2/3 of the available space, respectively. This simple-seeming innovation can radically cut down on the amount of "resize" code that you have to write.
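As a quick sketch of that last point (the labels and ids here are arbitrary):

<hbox flex="1">
  <!-- flex="1" and flex="2": the second button gets twice the share of the
       leftover space, so roughly one third versus two thirds of the row -->
  <button id="small-button" label="One third" flex="1"/>
  <button id="large-button" label="Two thirds" flex="2"/>
</hbox>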

XUL applications can be run either from a user's local machine (through an RDF-based configuration file) or from the web. The latter case puts some major restrictions on local file access, but otherwise works fine. You can embed XUL elements within HTML (for Firefox browsers only, of course) by using XUL's namespace:

xmlns='http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul'

or you can create XUL applications with XHTML interspersed. Typically, this will require that you modify your server software so that it can legitimately serve up XUL documents recognizable to Firefox. In IIS, this is done by adding the XUL MIME type:

application/vnd.mozilla.xul+xml

to the list of MIME types that IIS is aware of (select the Properties pop-up menu item for your server in the IIS control panel, select the MIME Types button, then add the appropriate association for .xul as application/vnd.mozilla.xul+xml). With Apache, you add the association by editing the httpd.conf file - a line such as AddType application/vnd.mozilla.xul+xml .xul does the trick.
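To show what the mixed-namespace case looks like, here is a minimal sketch of a XUL widget dropped into an XHTML page (the title and label are arbitrary, and the page would need to be served as XML - for example as application/xhtml+xml - for Firefox to honor the namespaces):

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xul="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <head>
    <title>XUL inside XHTML</title>
  </head>
  <body>
    <p>An ordinary HTML paragraph, followed by a XUL widget:</p>
    <!-- Rendered only by Gecko-based browsers such as Firefox -->
    <xul:button label="I am a XUL button"/>
  </body>
</html>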

XUL Resurgent

Interest in XUL has picked up dramatically ever since Firefox came into its own as a separate application, and the combination of ease of use, cross-platform compatibility, intrinsic web awareness and a growing community of developers makes it a technology to watch.

One problem that XUL faces is that the documentation on the Mozilla site is sketchy at best. However, XULPlanet at http://www.xulplanet.com has several good XUL tutorials and reference documents that are up to date, if not necessarily complete or exhaustive. If you're interested in XUL, I would strongly recommend that you check out that treasure trove first.

XUL is, in many ways, analogous to Microsoft's XAML, and with the recent work on the part of the SVG community making its way into the Mozilla effort, I see XUL and XAML effectively going head to head as the first of a new breed of meta-application development languages. XAML is perhaps more exhaustive, but until the Mono effort gets further along, it is not as well represented on non-Microsoft platforms. Additionally, XAML puts a lot more of its emphasis on the use of C# code-behinds, an approach that makes sense for very large enterprise developers but seems like overkill for the average programmer.

XUL may not be your cup of tea, but you owe it to yourself to at least check it out ... as an application framework, it may very well become one of the big technology stories of 2005.

-- Kurt Cagle

September 23, 2004

The Era of the Mega-Storm

On a completely non-computer-related note (there's been a lot of fodder for thought today), I noticed a headline that Ivan is coming back. Yeah, you read that right. Hurricane Ivan, one of the most lethal storms to hit the East Coast in a century, was pushed back out to the Atlantic, where it caught a cross current, headed back into the Caribbean, gathered strength, and seems about ready to pound the Gulf Coast as a tropical storm - oh, and the remnants of Hurricane Jeanne seem poised to do the same thing.

Global warming is often taken by those unfamiliar with complex systems to mean a gradual, uniform warming across the world - temperatures go up by one or two or three degrees, which pushes summer temperatures up a little bit. However, this assumption can be dangerously naive. Global warming is actually causing cooling to occur in certain portions of the world, even as the average temperature increases. What changes is something more important - the amount of energy in the system.

Complex systems are not random ones. Rather, they form regions of quasi-stability, where a particular weather pattern becomes apparent. Until fairly recently, the weather in the Gulf of Mexico has followed a certain pattern - storms would form in the Atlantic (driven largely by heat, sand, and contaminants picked up from the growing Sahara desert) and move into the Caribbean, where the reasonably warm Caribbean water created a thermal gradient called a convection cell: cold air pushes down in a column, hits the warm Caribbean water, expands outward as it is heated, rises in a donut shape back into the upper troposphere, cools again, and the pattern repeats. Because of Coriolis effects, the system also begins spinning, and as more energy moves through the torus, the spin becomes faster. This is the basis of all cyclonic weather systems (i.e., most storms).

Once that cyclone hits land, the energy that sustained it (the warm water) disappears, and the storm begins to lose energy. Usually, by the time it makes its way more than a few hundred miles, the storm has lost enough energy that it can't maintain the central column of air, and it dissipates. Usually. But not this time. This time, Ivan had SO MUCH energy, due to the extraordinary warmth of the Caribbean, that it managed to maintain its cohesiveness even after making its way up to Pennsylvania, and once back over the (warming) Atlantic, it was able to reconstitute itself.

The Earth rotates, and that rotation in turn drags on storms and moves them into large circulation systems. Because most storms in the past haven't had that much energy, they fall apart long before they are significantly affected by this. Yet somewhere along the line, we may have entered into a new regime, a new semi-stable weather pattern, which is most ominous. If cyclones have enough energy to make a complete circuit of the Caribbean circulation cell, then you have the possibility of a storm becoming a "repeat offender", persisting as a distinct entity for two, three, or four passes before entropy finally tears it apart.

What's worse, as storms re-enter the Caribbean, there is an increasing likelihood that a new storm can "cannibalize" the remaining energy of an older storm, with the effect that a Category 3 storm gets bumped to Category 4 or 5 with some regularity, and the likelihood increases of a hypothetical Category 6, which would, by extension, have sustained winds in excess of 180 mph. To put this into perspective, Ivan's "sustained" winds topped out at 165 mph, but there were gusts within the storm that well exceeded 210 mph. A Category 6 storm would have gusts in excess of 230 mph, which puts it into the regime of damage caused by a tornado (not to mention that storms of this magnitude would be spawning tornadoes by the dozens, as such high "gusts" would be subject to incredible cyclonic pressures).

Average global temperatures have risen roughly 1 degree Celsius in the last twenty years. If existing trends continue, global temperatures will rise between three and ten degrees Celsius by the end of the century, with most computer models pointing toward the upper end of this range. That's energy going into creating even hotter water in the Caribbean, and consequently even more powerful storms there. This raises the possibility of a permanent cyclonic band forming over the Southeast US, with hurricanes becoming weekly occurrences from March until November. Not a pleasant thought, to be sure ....

Patent Absurdity

We interrupt our regularly scheduled program for an important note from a commentator to this board:
Microsoft is trying to stop open formats with the accumulation of patents. In New Zealand, Microsoft have lodged the following patent applications:

Patent Number: 525857
Markup language and object model for vector graphics

Patent Number: 535067
Markup language data structure

The applications are in the examination stage. If they're approved we have three months in which to lodge objections. For more information contact the Intellectual Property Office of New Zealand (IPONZ):
http://www.iponz.govt.nz
info (at) iponz.govt.nz
hearings (at) iponz.govt.nz

Microsoft is also out to stop Open Office, as can be seen with this NZ patent application:

Patent Number: 525484
Patent Title: Word-processing document stored in a single XML file that may be manipulated by applications that understand XML

I have railed against the abuse of patents in the past, and had hoped that after the debacle with ActiveX components in IE, Microsoft legal would have learned its lesson. With this announcement, however, that appears not to be the case.

Let's get this straight. Microsoft was an initial signatory to the W3C SVG working group, where one of the requirements of joining was a discovery process in which signatories would disclose any previously patented technologies that might in some way invalidate the W3C's claims on the technology as a royalty-free standard. Microsoft was one of two players (and, I suspect, the bigger one) in the August 2001 debacle within the W3C that attempted (unsuccessfully) to introduce RAND (Reasonable and Non-Discriminatory) patent licensing into the W3C structure, specifically to cover SVG - licensing that would have introduced a fee into the open standard itself.

Microsoft dropped out of most W3C efforts and started the WS-* standards group, which has to date produced one interesting security-related specification and a great deal of otherwise useless crap. It did this in great part not to expedite the standards process (any legitimate standard can take years to reach fruition, something they well know) but to avoid having to contribute to the open standards community and risk losing what they perceive as valuable intellectual property to their competitors.

As far as the vector standard goes, Microsoft doesn't have much of a leg to stand on. Adobe has a markup language and object model for vector graphics. It's called PostScript. While not XML, it predates ANY such effort by Microsoft by decades, and from personal experience PostScript was in fact being used as an object language as long ago as 1983 (through the agency of Wolfram, the makers of Mathematica). Moreover, even in the realm of vector graphics markup, Adobe had a precursor to SVG called PGML (Precision Graphics Markup Language) that predated Microsoft's only legitimate claim to an XML-based vector language - VML (Vector Markup Language). Currently, while still available via download, VML is largely considered a historical oddity and has all but disappeared from the Internet.

Microsoft also had another claimant to the vector markup language process to muddy things up - the ill-fated Chromeffects, an XML-based initiative to provide a 2D and 3D graphical and multimedia framework for Windows, way back when. Sound familiar? Can you say XAML, boys and girls? I knew you could. Chromeffects was too ambitious for its time, working far too slowly for anything even remotely resembling a real-time desktop system, and it was shelved in November 1998. Ironically, this may have been the reason for pushing Avalon out to 2007, nearly a decade later.

Now here's where things get interesting. One of the requirements of the W3C's Recommendation process is that there must be two working implementations of a given Candidate Recommendation before it can be recognized as a full Recommendation. It's a fairly rigorous standard, and one of the things it ensures is that no one company can submit its technology to the W3C without at least one competitor also having a chance to compete in the marketplace. It's one of those little subtleties that has kept my admiration for the W3C very high, even when they do stumble.

Chromeffects never worked; it was withdrawn before reaching more than a very preliminary beta. Most patents require, at minimum, a demonstrable prototype, to ensure that something the inventor claims should work really does.

I'll not get into issues of fairness or unfairness here - patents in 2004 are intrinsically unfair, as they are typically used by corporations as a way of stifling competitors by unravelling the support skeins that others build their technology on. Even if the patent claims are disproved, the legal resources required to disprove the claim can significantly weaken competitors. My personal take is that patents on IP should be abolished, but that's not going to happen in the current culture of oligarchic fascism.

Open Office Targeted

The claim against OpenOffice.org (OOo) represents yet another attempt by Microsoft to eliminate by legal means what has proven to be a remarkably embarrassing opponent otherwise. The use of the "single XML document" strategy in the OOo patent attack shows that Microsoft realizes its case is extraordinarily weak there as well - perhaps weaker than it is for vector graphics.

Open Office made use of XML in a very innovative way - using XML as the format in which information is stored for office applications, chief among them word processing. They chose to go with an array of five XML documents for efficiency - by breaking what could easily have been a single document into five, they could maintain several smaller DOMs simultaneously for handling different aspects of an application. I've used OOo XML as the foundation of my own publishing efforts more than once, and the division makes a great deal of sense. They were also innovative in putting this XML together as a bundled package in the zip format, another widely recognized open standard, making it possible to send related content without the need to encode binary information in XML, something that is both expensive and in most cases unnecessary.

Microsoft has been criticized for years for not making their format user-accessible XML, especially after OOo beat them to the punch, and they would probably still not have done so had Open Office not begun to eat significantly into their sales of MS Office. They did have XML-encoded metadata within Office beginning in 2000, but even there it should be noted that this information was fiendishly difficult for users to extract and utilize - it was a marketing point with very little substance, and the company was roundly criticized for marketing its XML support in this manner when in fact it had nothing useful.

Open Office is a nightmare for Microsoft. It has improved dramatically in recent years, to the extent that it has begun to pull ahead of Microsoft in several key areas of development. It has forced Microsoft to drop the price point on their most profitable product several times, and they can't really win against a product that is available for free. The extensible nature of Open Office, while still too complex, is making it more appealing to those wishing to create specialized builds, while its lack of a price tag has made it the darling of many a national or state government body with constrained resources and high needs. It has even made inroads in the one area where Microsoft thought it never would - the large-volume enterprise customers that Microsoft thought they had locked up after pushing WordPerfect down a hole.

Given all that, a legal attack via patent was pretty much inevitable, but it is also pretty baseless. The single-document argument belies the point that, for XML, the distinction between one document and five interrelated documents is spurious - you can turn five into one by enclosing all of the documents under a single node in an XML tree. Given that XML is itself an open standard published by the W3C, Microsoft's attempt to claim any kind of legal foundation there is laughable.

New Zealand?

The fact that Microsoft is choosing to pursue this in New Zealand is also telling. Microsoft knows that it can't secure these patents in the United States, but if it can get an international patent it can effectively claim that it supersedes the US patent. I would like to think that the New Zealand patent office is able to smell this pile of dung for what it is, but perhaps Microsoft hopes that New Zealanders are technically unsophisticated. I rather doubt they are ... they've been burned before by the promises of large corporations bringing "better"-run government via privatization, only to have the CEOs of those same corporations rob them blind, ruining what had been a model democracy in the process.

Still, such patents can cause havoc, especially given the resources that a Microsoft can bring to bear once it has such a piece of paper in hand. I will be contacting the New Zealand patent office to offer my comments, and I would recommend that all my readers do the same. Microsoft has the resources to build world-class software applications, but there is a certain innate laziness in the company's choosing legal means to stifle competition; these suits reflect badly upon it, and lay to rest any "outreach" efforts that Microsoft may be attempting to make toward the Open Source community. Perhaps if they had to actually compete, Microsoft might finally end up making software that is simultaneously secure, stable and usable.

Kurt Cagle
Metaphorical Web Publishing
http://www.metaphoricalweb.com

September 20, 2004

SVG and the Search for <elegance>

After reading one comment on my previous post, I thought it might be worth replying to it as part of the "main sequence". Much thanks to Mario for asking these questions.

Once upon a time, in the not so distant past, I considered myself a professional artist. I worked for a computer game company, drawing highly pixelated versions of Vanna White for one PC version of Wheel of Fortune. I worked as a computer graphic illustrator for an agency a few years before that, working on schematic maps and charts for annual reports of some fairly large companies. Thus I've always had a certain basic sensitivity to the needs and problems of artists, though of late my artwork tends more toward science fiction/fantasy than Fortune 500.

Artists, contrary to some popular perceptions, are seldom lazy people. A good artist can spend 50-100 hours on a single piece, making seemingly minute changes or spending hours doing nothing but stippling (putting dots of ink onto a page) or other very repetitive tasks. The difficulty for most artists comes from the fact that they become extraordinarily focused on what they are doing to the exclusion of everything else, which means that many of the mundane tasks most people take for granted simply don't happen except in those rare intervals when the artist can surface. Outsiders can take this as an indication that artists are typically very reserved (they generally are, except among themselves) and occasionally aloof.

In some respects, I could replace the mind-numbing (or soothing, depending upon temperament) process of meticulously creating a drawing with the act of writing quality algorithms, and the description of artist becomes one of programmer, or more properly, the aesthetic programmer.

There are a number of programmers who are more interested in the result than the process - these are programming engineers, who often tend to deviate very little from the established procedures for performing some action, and who as a consequence find the creative aspects of both art and programming subtly unsettling.

The other kind of programmer, the aesthetic programmer, may not be the best choice for putting together a human resources database system, but these are the people who ultimately end up driving most of the new innovations in the field. I believe that this is because what they are seeking is that elusive quality called elegance.

Elegance is a difficult concept to explain, even to other programmers, because it is in effect a measure of the aesthetic qualities of a piece of code. Certain scientists understand this quality instinctively - high energy physicists for instance, who tend to operate on an extremely abstract level - and I think that one cannot be a mathematician without a desire to seek this elusive property. Einstein understood this with his elegant statement E = mc², and his more elegant though much less well known Gμν = 8πTμν.

For an artist, elegance can be thought of as the perfect balance of color, composition, lighting, symbolism and realism. For a poet or writer, elegance is prose or poetry that has the right number of words, for the composer the right number of notes. Even for a master, elegance is elusive, a state that can only be achieved infrequently, but what gets created in the process often ends up setting the standards for what we consider the foundations of both art and science.

So where does SVG fit into all of this? Or put another way, where's the tag within SVG? It's not a part of the 1.0 or 1.1 draft, I know that for certain, and it's very likely that it will not manage to make its way into SVG 1.2 either. Okay, I'm being a little frivolous here, but there is a solid reason for me asking such a question. SVG is something of a platypus, ornithorincus anatinus (the name of which I remember, curiously enough, from a Mr. Roger's Neighborhood song). It is a graphics format. It is an animation format. It is an interactive GUI format. It is a DOM for performing integrated web services. It's becoming a publishing format. Like the duckbill platypus, it seems like it was stuck in some kind of bizarre transmogrifier ray, a la Vincent Price's The Fly, neither bird nor mammal but somewhere in between.

So where does SVG fit into all of this? Or, put another way, where's the <elegance> tag within SVG? It's not a part of the 1.0 or 1.1 spec, I know that for certain, and it's very likely that it will not manage to make its way into SVG 1.2 either. Okay, I'm being a little frivolous here, but there is a solid reason for asking such a question. SVG is something of a platypus, Ornithorhynchus anatinus (a name I remember, curiously enough, from a Mister Rogers' Neighborhood song). It is a graphics format. It is an animation format. It is an interactive GUI format. It is a DOM for performing integrated web services. It's becoming a publishing format. Like the duckbill platypus, it seems like it was stuck in some kind of bizarre transmogrifier ray, a la Vincent Price's The Fly - neither bird nor mammal but somewhere in between.

There's never really been anything like it, to be perfectly honest. Flash often comes to mind as the point of comparison, but in reality, Flash lacks the capabilities for abstraction that are intrinsic to SVG. Don't get me wrong on this - Flash is a very powerful tool for creating impressive-looking graphic animations. The difference between Flash and SVG, however, is that Flash is a self-contained world; SVG, on the other hand, is beginning to shape up as a language that entwines itself with other specifications.

This will become more obvious when SVG moves more into the native space of browsers and operating systems, rather than being a plug-in. The significance of the Mozilla SVG effort, even at its current nascent stage, is that you can create interactive and animated graphics inline to other markup such as XHTML or XUL. This means, among other things, that the graphics on a page are immediately accessible as part of the DOM, are integrated into the whole fabric of a web page both programmatically and visually.

This DOM interaction has a lot of import. With Flash, it takes a certain amount of skill to write interfaces into the rest of the browser, and those interfaces are perforce very limited; in general, it is easier there to put most if not all of the interactive capability for the "page" in the Flash itself, bypassing the web page completely. Beyond such issues as text search or the retrieval of metadata for search engines, this means that Flash ends up creating its own "browser" environment, one that can render a site useless to someone without the Flash plug-in (ask anyone on a typical Linux machine until recently what that means).

Native SVG support, which is ongoing, changes that dynamic. Graphics can be changed at multiple levels of abstraction, and can support a W3C-consistent notion of level of detail (LOD) - initially a 3D concept in which distant objects are rendered at the lowest possible resolution until a certain threshold distance is reached, at which point a higher polygon mesh and higher-resolution textures are used to render the object more realistically. LOD in a 2D sense means that information is only provided once attention is focused on a particular area of a graphic, and hidden once that area is no longer the focus.
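A minimal sketch of the 2D idea, using SVG's declarative animation - the marker, the made-up statistics, and the ids are purely illustrative: the extra detail stays hidden until the pointer gives that area the user's attention:

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="200">
  <!-- Low-detail view: a single map marker -->
  <g id="city">
    <circle cx="150" cy="100" r="6" fill="crimson"/>
    <!-- High-detail view: revealed only while the marker is hovered -->
    <g visibility="hidden">
      <text x="160" y="96" font-size="10">Pop. 540,000 (illustrative)</text>
      <text x="160" y="108" font-size="10">Elev. 520 ft (illustrative)</text>
      <set attributeName="visibility" to="visible"
           begin="city.mouseover" end="city.mouseout"/>
    </g>
  </g>
</svg>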

Graphic libraries can be easily created and integrated with SVG - not static graphics but dynamic ones generated from live data streams. Mapmakers have already discovered this use of SVG, but so far artists generally haven't. The distinction between graphic and multimedia presentation blurs as a consequence of this one facet of the language, as does the distinction between presentation and application. An SVG illustration, embedded within a web page (or a PDF), could read the story around it, could change from visit to visit, could customize itself to its audience. Moreover, it could leap out of its apparent visual boundaries and become an inhabitant upon the page itself, participating not only in the telling of the story, but in the reader's interaction with it.

Lest someone bring up the woeful tales of Bob and Clippy, the autonomous agents of Microsoft fame reviled pretty much universally, I'd contend that the problem with such agents lay less in their concept and more in their focus. An agent is an extension of the reader, not of the system. As such, an agent is more like a pet, a playful kitten perhaps, than a majordomo - it will play in the corner of the screen, chase things across the screen, perhaps piddle on the desktop, but what it won't do is set expectations on the user by "waiting" on them. It's that elegance thing again - effective design means understanding the psychological driving factors of the people who use the technology, and means putting in the extra effort to imbue agents with a sense of awareness.

The reason I'm focusing on agents here is precisely because all intelligent graphics ultimately are agents of one sort or another. They interact with the user, provide some sense of awareness with their surroundings, and can have memory. Sometimes they can be abstractions of living beings, but these are simply the most ostentatious of such graphics; one of the failures of Bob was the inability to recognize the universality of agents, concentrating upon the "reality" of the agent rather than upon the notion of intelligent graphics.

The key to elegance is to understand that everything has its place, its modality of expression, and the most elegant solutions are those in which the medium is utilized to its best effect. An SVG artist is creating intelligent graphics, autonomous, semi-aware agents, and the best artists in that medium will create art like no other, for it will be art that speaks to you, perhaps quite literally ... and if that's not the quintessence of elegance, I don't know what is.

-- Kurt Cagle


September 19, 2004

SVG - Are We There Yet?

Anyone who has kids knows this particular plaint from the backseat of the car, usually not long after starting down the road toward some favorite vacation spot. I was thinking about this today in light of the recent SVG conference, and some post-conference discussions I've had with a few of the other attendees via e-mail.

At what point do you throw in the towel with respect to an Open Standards technology such as SVG? At what stage in a technology's development do you question whether it has failed to reach some minimal threshold of survivability, and will only end up being a shadow standard - one that may exist on paper but couldn't survive out in the wild?

Such questions have more than academic interest. SVG is potentially a very useful technology ... one that is already beginning to prove itself in niche areas such as GIS, where the advantages of semantically rich graphics are most immediately obvious. Outside of this small domain, however, the advantages are nowhere near as clear-cut: there are commercial companies with very well-entrenched proprietary solutions, and there are few tools of the caliber of an Illustrator or a Flash to entice graphic artists. Explaining the benefits of XML to a tech-savvy crowd of developers is a fairly easy proposition ... explaining it to graphic artists who can already do everything the technology offers via other means is a much more difficult one.

If a technology isn't sufficiently viable, if it doesn't hit the critical mass necessary for "fusion" to occur, then it will be like a brown dwarf, a protostar too small to sustain its interior fires. For that critical mass to happen in the case of SVG, several different pieces will need to fall into place:

  1. Developer Base. If there aren't enough developers working with the technology, then the quality of the applications built with it will almost always be far below that provided by a commercial vendor's offerings, and the breadth of those applications will be limited. The SVG developer base is not yet huge, but it IS growing at a pretty good clip.
  2. Sponsor. In the Open Standards arena, there is typically a sponsor organization pushing the technology. In some cases, such as IBM's release of code for Eclipse, this champion is a large vendor interested in promoting open source solutions. In other cases, such as the Apache Software Foundation, it is a large umbrella group of technology providers that serves to promote the critical standards. I'd have to say that SVG has a few champions in different spaces - the aforementioned Apache with the Batik project (though that program has slowed in recent years), Adobe (which has had an on-again, off-again feeling for SVG), and of late the Novell/IBM alliance, which has been quietly funding SVG development on Linux.
  3. Synergy. An open standard typically does not exist by itself. Instead, it will often gain much-needed value by coming along at the same time as technologies for which the standard lends itself particularly well. The "LAMP" suite -- Linux, Apache, MySQL and Perl (or increasingly PHP) -- represents such a synergy. SVG is beginning to really take off in the wireless space, with the language especially well suited for JIT transmission of graphical content in a small, easily modifiable payload - yet another synergy that I suspect will become increasingly important over the next couple of years.
  4. Modularity. The degree to which a technology is "plug-and-play" can significantly affect its degree of adoption. In some respects this is a strength for the W3C in general, in others it's a weakness. SVG in particular has some problems with modularity -- it can in theory be integrated with XHTML and work with XForms, but the practice is usually not as clean as anyone would like. On the flip side, while SVG has adopted the W3C DOM, it's not completely consistent in its use of the newer DOM standards, and there are questions in the sXBL space about the support for certain other W3C standards such as XPath. The temptation to roll your own API is always there, of course, as a generic API adds a layer of abstraction complexity that more dedicated APIs don't have to deal with, but a well-designed general API is also usually less susceptible to the feature creep and encrustation that more specialized APIs tend to encounter on an all too frequent basis.
  5. Evangelists. Guy Kawasaki was responsible for bringing this term into the programming lexicon, but the idea has been around for a while. Sponsors are usually capital backers, the ones to provide the funding to keep the core developers in pizza. Evangelists, on the other hand, are people who live, eat and breathe the technology, who serve to champion it to other developers, corporate managers, and the public at large. Evangelists fervently believe in the technology, in what can be done with it, and typically are not directly associated with the sponsor of the technology (though evangelists often eventually get picked up by sponsors for doing things like standards efforts). I'd consider myself an evangelist for SVG, along with people like Don Demsiak (Don XML), Ronan Oger, Michael Bolger, Peter Schonefeld, Antoine Quint, Michael Bierman, Philip Mansfield, Jon Ferraiolo and others likely to be found on the SVG-Developers list.
  6. Exposure. Evangelists are important because they act as intermediaries between the core development community and the media (both IT and otherwise). Right now, SVG is a barely-kept secret. GIS is adopting it in a big way, and it's beginning to make waves in wireless media, but SVG is still at the point where most people require the acronym to be spelled out in order to know what it refers to. However, that is showing signs of change, due in part to the efforts of evangelists to educate the larger IT community, and in part because there is enough saturation of products with SVG to make it a strong marketing point. When you see stories about SVG in Time Magazine, you'll know that it has become mainstream, and you'd better have that IPO ready.
SVG has the potential to have a huge effect on all aspects of the web, something I think many in the SVG community already know. However, to make SVG adoption happen faster, there are several things that you and others can do:
  1. Become an Evangelist. If you are a consultant, pitch SVG solutions to your clients for intranet applications. If you are a full-time programmer, look for places where SVG can be used in your company's applications, either for core functionality or eye candy. If you are a manager or marketer, look at what advantages SVG can buy you, and become a champion of the technology in your own company. SVG is maturing enough now that you CAN build such solutions into your own applications, though it can take some work.
  2. SVG Web. SVG on the web is going to take a while to become solid, in part because of the whole issue of plug-ins and varying degrees of support. If you can, build SVG sections on your websites, clearly labeled for what they are (a quick detection sketch follows this list). SVG is not quite ready yet for core functionality within public sites, but it can be an interesting diversion for those sites that do support it.
  3. Contribute. I'm going out on a limb to say that I think Mozilla Firefox is going to seriously erode Microsoft Internet Explorer's base, especially as Redmond doesn't seem to have anything in the works to shore up that side of things. The Mozilla SVG support is sporadic right now and could use an infusion of developers to pick things up, but it's solid enough that this isn't the massive effort it could be. Think about the advantage of native support for SVG within a web browser. If you are in the Linux world, help with the KDE and Gnome SVG efforts, for much the same reason - SVG support built into the operating system is a huge advantage, and additionally gives you a track for getting SVG onto the Macintosh as a native API.
  4. Create Art. If you are an artist, a graphic designer, an animator, or in a similar field, use tools such as Adobe Illustrator, Inkscape, Sodipodi, or any of dozens of other such editing tools to create good quality SVG artwork, and get it in front of people, along with instructions for loading it in appropriate viewers. Artists often find SVG intriguing once they figure out the potential for what it can do, but all too many are simply not aware of SVG in the first place. I hope to be able to set up an SVG gallery soon as part of this process as well.
  5. Talk to Vendors. Software companies respond to customers and to developers. The more they hear about demand for a given technology from their clients, the more likely they are to add support for that technology if a decent business case can be made. Microsoft is probably a waste of time, but Macromedia's halting first steps in that direction at least indicate that they are cognizant of the market, and Adobe's SVG efforts (while maddeningly secretive) seem to be gathering steam in response to the technology.
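
To make the second suggestion a bit more concrete, here is a minimal detection sketch in Javascript. Treat it as illustrative rather than definitive: the page names are made up, and detecting the Adobe viewer under Internet Explorer requires a different mechanism that I've omitted here.

    // Rough check for SVG support: native DOM support first, then a
    // plug-in registered for the SVG MIME type.
    function canShowSVG() {
      if (document.implementation && document.implementation.hasFeature(
            "http://www.w3.org/TR/SVG11/feature#BasicStructure", "1.1")) {
        return true;
      }
      return navigator.mimeTypes != null &&
             navigator.mimeTypes["image/svg+xml"] != null;
    }

    if (canShowSVG()) {
      document.write('<a href="gallery-svg.html">View the SVG version</a>');
    } else {
      document.write('<a href="gallery-static.html">View the static version</a>');
    }
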
Has SVG reached critical mass yet? Perhaps. The tipping point is usually only obvious in retrospect, where the second-order derivative begins to contribute in a meaningful way to the adoption of that technology. My gut feeling is that we're close to being there, if not slightly past it, that the pieces are all beginning to line up, and that the growth in SVG-related tech is going to be explosive in 2005, but that's just a gut check, and I'm still pulling together the empirical evidence for it. I'm lining up my SVG ducks in a row right now, and would recommend that you do the same.

Until then, remember to stop for potty breaks. It keeps the kids from driving you crazy ...

Kurt Cagle


September 17, 2004

Thoughts for a Rainy Friday

Different people have different temperaments, of course, not to mention different temperatures. What is the ideal weather for one person - blue skies, mid-80s, no humidity or rain in the forecast anywhere - can be downright depressing for those who like gray skies with a light rain and the temperature cool enough to wear a sweater. The former, the Summer People, are outgoing and gregarious, the kind that keep LL Bean in business. The latter, the Autumn People, seem to make up the bulk of programmers and writers, people who would far rather sit in a coffeeshop, typing away and watching the rain.

I am an Autumn Person. The rains have begun here in Seattle, and I am in my element.

I've been spending more time coding an XML editor within Firefox. I covered the application in some detail a few blogs back (The Contender), but wanted to add a couple of quick comments in light of Mozilla's 1.0 Release Candidate for Firefox. It's going to be really, really hard for me to go back to Internet Explorer.

In building this app, I've discovered a set of development tools, the Web Developer extensions, that lets me do everything from debugging Javascript code to properly positioning elements to clearing the cache, all from the toolbars or context menus. The Script Editor (codenamed Venkman, presumably after the lead Ghostbuster) is perhaps not quite as polished as IE's, but it comes close, and it works quite effectively at tracing scripts as they jump across multiple windows. As a programmer who spends a lot of time in web GUI development, I have often had to deal with second-class tools to diagnose application problems, and for the first time in a long while I'm beginning to feel like I have the tools to really do my job.

The extensions for Firefox, written in XUL, XBL and XHTML (what I'll call X*L for brevity's sake), are typically third-party tools, produced by the development community, that significantly extend the browser while still maintaining a secure architecture. The difference between these tools and ActiveX controls comes down to the degree to which the browser itself exposes its functionality. Firefox is essentially made up of XUL components, and the extensions consequently have a much deeper layer of access into the inner workings of the browser. However, there has been a very conscientious decision to set security restrictions at all levels, sometimes annoying for a developer but comforting from an end-user's standpoint.

I made the jump to Firefox to provide alternative browser support for Mac users of the Content Management System I'm developing, yet I've found that internally there is a growing enthusiasm for the browser. One Mac user, a die-hard Safari fan, was blown away by how much she liked Firefox, and by how it extended what she had become used to with the native Safari browser. The other IE-based users are becoming just as enthusiastic; they like the degree of functional customization that can be done with it, and the fact that it is faster than Explorer. Those of us who are Linux aficionados at the company are jazzed that we can develop for Firefox on a Linux platform and know that it will work regardless of platform; this means that, for the first time, the Linux users don't have to keep an extra PC around just to use the CMS, something that WAS the case with the older version.

Microsoft may have made a serious miscalculation in ignoring the browser market. Until recently, they had a lock on robust clients for intranet applications, and I suspect they were hoping to squeeze those clients back into the WinForms space and away from the loss-leader browsers. Given the sophistication of Firefox even now, as well as what's slated for it in the near future, what may happen instead is a migration away from Microsoft-dependent solutions in general toward a more diverse pool of operating systems, with Firefox as one of the key tools in supporting enterprise-wide applications. Far from watching the browser market collapse, the program managers at Microsoft may find very soon that their solutions are considered too restrictive in a world where the non-IE browser is becoming more robust. They ignore this at their peril.

I hope to have more to say on an alternative thread, about the future of e-mail, in my next post.

-- Kurt Cagle

September 16, 2004

Refreshing Atom

Beginning to get the hang of this blog thing, and came to the realization that the last format I was using didn't really lend itself all that well to the message at hand. I have changed the background template (using a fairly Byzantine quasi-XML language that has left me scratching my head more than a little), which should make the page at least a little easier to read. I've also made the Atom Feed a little more obvious (it's the big blue XML button on the left-hand side).

Atom's an interesting standard, and with any luck it should lay to rest the long, controversial process of trying to standardize RSS in favor of something more reasonable. At the recent SVG conference, I brought up a point that I think gets lost: I consider Atom to be a core standard, even though it isn't currently within the W3C rubric. What Atom (and RSS) does is provide a mechanism for syndication, and this mechanism may be one of the most important parts of the human/computer web for the next several decades.

So why should the latest adventures of Blog and Blogette be so important to the grand schemes of the Semantic Web? One of the problems that has emerged in the last few years is that there is a limit to the degree of addressability of the web. Even with the now IPO blessed Google, most of the relevant web is remarkably difficult to get to. This becomes especially true in light of blogs, which are, by their very nature, time dependent. You cannot rely upon the passive process of spiders to eventually get your content to others, as the interval between your posting and the spider's finding of same may be measured in months, or in some cases years.

The solution, of course, is to do what magazine and newspaper publishers have done for years - you create a subscription service. Once someone loads your subscription packet (the Atom XML file), their news reader will be able to periodically check whether any new content has come from that service, and will notify them at that point (often with synopses of the relevant content). It is, truth be told, one of the few remnants of the "Push Computing" philosophy of the mid-1990s that otherwise failed so spectacularly, though I have long felt that the failure came about because the initial "push" was to move advertising to users' desktops.
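
To make that loop concrete, here is a rough sketch of the reader side in Javascript, using Mozilla's native XMLHttpRequest object and the Atom 0.3 namespace; the feed URL is purely illustrative, and under Internet Explorer you would go through an ActiveX object instead.

    var ATOM_NS = "http://purl.org/atom/ns#";   // the Atom 0.3 namespace

    // Pull the current headlines out of a feed; a real news reader would
    // compare these against its cache and notify the user of anything new.
    function fetchHeadlines(feedUrl) {
      var req = new XMLHttpRequest();
      req.open("GET", feedUrl, false);          // synchronous for brevity
      req.send(null);
      var entries = req.responseXML.getElementsByTagNameNS(ATOM_NS, "entry");
      var headlines = [];
      for (var i = 0; i < entries.length; i++) {
        var title = entries[i].getElementsByTagNameNS(ATOM_NS, "title")[0];
        headlines.push(title.firstChild.nodeValue);
      }
      return headlines;
    }

    // e.g. fetchHeadlines("http://www.example.com/atom.xml");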

This in turn brings up two items of note. The first is that bad technology, often driven by the deliberate pursuit of money, usually fades away pretty quickly, but the ideas are often recycled into applications that enable better (and typically freer) communication. The irony here is that such methods often end up supporting (or piggy-backing) revenue generators that make more sense within the constraints of the technology. People typically do not mind ads when they exist in a symbiotic relationship with the relevant content ... it's when they become intrusive and disruptive that people's ire rises.

The second point is that such syndication readily provides a means by which abstracts of content can be transmitted in a portable, relational manner. The abstraction of articles is one of the more challenging issues in building the contextual web, as abstraction is usually not something that can be easily automated. Someone has to put together these summaries, has to determine what is and is not relevant, and someone needs to bundle abstracts with the relational links to other content.

If you think of Atom as basically being a bundled collection of links tied together under some "editorial" guiding principle, with some measure of the relevancy of the links being made by such determining factors as date, keywords, proximity and perhaps more definite metadata (such as RDF or Dublin Core vocabularies), then an Atom document provides a very different tool for understanding the inner workings of the web.

For more information on Atom, check the Atom Wiki. In addition to providing the Atom XML specification, it also includes a formal API, in the mold of the XML DOM API, for working with Atom content. While it may be a while before it completely replaces RSS (in its many manifold expressions), Atom seems to be fairly quickly becoming the dominant syndication language for the web.

I'm putting together my next technology reviews page, and should have it up by Friday. Until then, enjoy!


September 14, 2004

Peak Oil, Corporate Malfeasance and the Rise of the Millennial Generation

For those of you who have been reading Metaphorical Web for a while, you'll know that occasionally I will branch out of the space of XML and into bigger issues of "society watching". These incorporate a lot of my own political beliefs, which tend to be in a gray zone between liberal and libertarian, and thus if you find such political discourse inappropriate, I ask that you simply skip this blog and wait for my next, more technical one.

While there is a certain degree of controversy about this, one of the topics that has begun to percolate to the surface of awareness for a large number of people is the idea of Peak Oil. This concept, in its simplest form, is that after nearly a century of pumping oil wells and discovery, we appear to have identified all of the available fields of any quality. With satellite mapping and geo-magnetic and gravitational analyses, it is a fairly easy proposition to look down on this blue ball of ours and determine, with a remarkable degree of accuracy, where most of these pockets of crude are located, a capability that was useful a few decades ago, but which has also pretty much dispelled the notion that we will find another Saudi Arabia or Iraq anywhere else on the planet, with the possible exception of Antarctica, a place so hostile and ecologically fragile that the costs involved in raiding it (currently) make it infeasible to approach.

If you buy the common argument that oil comes from the remains of Jurassic era plants, then we are rapidly using up that resource, with nothing left. Even if you believe (as I do) that oil is much more likely the product of methane-producing anaerobic bacteria (methanogens), the "recharge" factor implies that it may very well be another ten to fifteen thousand years or more before we're able to refill the tank. As all of human civilization is only about that old, this implies that it will be a LONG time before we have another oil era.

There are alternatives to many of the contemporary uses of oil, though most are (currently) more expensive than a barrel of crude. Oil reclamation from food wastes can provide a pretty good grade of crude, though the distillation process requires a fair amount of input, and is still at the experimental stage. This will provide oil for limited uses, such as certain forms of plastics production, but it will not in general produce anywhere near enough oil to power our automotive infrastructure. Solar strips are becoming inexpensive enough (and powerful enough) to handle the powering of laptops and other devices, and could even replace our home and office heating requirements within the course of a couple of decades, but again there are problems with uniting this into a larger distribution grid, with very real physical limitations keeping the utility of this technology from being viable. Windmills can generate some electricity, but windmill farms usually work best only in windy, wide-open spaces, with a fair amount of that energy lost in transmission to the places it gets used (they also seem to have very adverse effects on birds' navigational systems - a windmill is a rotating dynamo generating twisting magnetic fields, to which many birds are very sensitive, causing them to fly into the ground near such stations).

In other words, while these "alternative" forms of energy could take up the slack in a localized region, they will likely never be adequate to generate enough power for the voracious US (and increasingly global) economy. This means that, even with the advent of nuclear energy (with its own attendant headaches, as Three Mile Island and the Nevada Yucca Mountain debacle so clearly illustrate), we are looking at oil becoming ever more dear just as the engine is roaring into high gear. A US economy with expensive oil means that air travel becomes prohibitively expensive (and hence locked into a death spiral), means that the cost of all manufactured goods goes up, and ultimately sounds the death knell of contemporary society. This, in turn, supposedly justifies the invasion of a foreign country in the name of securing those oil resources ... at least, that is the latest incarnation of the argument used by the Neo-Conservatives currently in power in the administration. And in some respects it WILL be the death knell of the civilization that we have become comfortable with, though there is definitely life after death here - a new society that may be better or worse, but certainly will be fairly radically different. These changes will occur, though perhaps not as I've outlined here, because there are very fixed physical resource constraints that we are facing now, constraints that will force change in society whether we want it or not.

The way that we work is already changing. Relatively inexpensive laptops and wireless computers provide the portals by which we connect to the tasks of work - I can sit in a Starbucks and build programs or analyses, communicating with the business server to coordinate my actions with co-workers or teammates, using IM to pass "watercooler" type information. This is true of anyone who works with information ... and yet there is still a definite work ritual - getting up early to fight traffic, going into a cubicle in an office park to type information into a box, spending time in meetings trying to get other people up to speed with a presentation, making the commute back home, then collapsing in front of the TV, exhausted but unexercised. The problem is that each stage of that ritual represents some (and in many cases significant) energy costs. I spend $50 a week on gasoline commuting back and forth. Real estate, though not yet really recovered from the last economic downturn, is still expensive compared to a couple of decades ago, as are the power costs of maintaining energy-swilling servers and desktop systems, though the latter is dropping in the face of new technologies. Right now the advantages outweigh the costs - it is easier to run a business when people are in general proximity - but those costs are mounting all the time, especially when this is tied into the other major cost increase of today: health care.

At the moment we're in a period of calm, though it's more like the eye of a hurricane than long-term prosperity. In a normal business cycle, we should be in a period of accelerating growth, unemployment (even by the very narrow and stilted definition used by the current administration) should be around 3%, and the economy should be purring along. Instead, airlines and theme parks are laying off thousands, mortgage companies that were in a hiring frenzy a couple of years ago are now going bankrupt, hiring is not even at the population replacement level, and radio and TV are filled with ads for "Debt Counselling Services". If you work full time, you are being "asked" to start sharing some of the health care costs. If you are a contractor, even one working full-time with an agency, you are likely paying for your own insurance completely, or doing without. Don't even bother asking if you're unemployed. In other words, even in an economy that by all rights should be operating at full steam, there are huge headwinds pushing back, and they are only getting stronger.

In Japan this week, I spent some time talking with a graphics professor. Much of his "teaching" work now involves dealing with students that he has never personally met, who are taking distance-learning courses to mitigate the high infrastructure costs of education. That he can now do so easily is an indication of a sea change going on in that sector. The outsourcing (whether "local" or overseas) of jobs is also in great part a consequence of the decreasing costs of such virtual work environments (though greed on the part of CEOs hoping to maximize their own paychecks at the expense of the employees plays a big part as well). Such outsourcing is simply an extension of the trend, started back in the early 1990s, of trying to eliminate as many full-time jobs as possible in favor of less expensive, just-in-time contractors who could be hired for an immediate job then "released" once the job was completed. The burdens that such businesses had previously shouldered, from health care to pensions and even equipment costs, were passed on to the contractor, who typically did not have the collective advantages that corporations had to spread the costs (and risks) among a large pool of people. Prior to the advent of virtualization, doing this was not cost-effective - the workers were not effective enough to justify the costs of working with them in a contract role. That's definitely no longer the case.

In the short term, this is the kind of situation that a CEO has got to love. In the long term, it may be their undoing.

American manufacturing is dying. The recent uptick in factory activity is driven in part by a major reclassification of what constitutes a factory (pushing many McDonalds and Starbucks locations into that category, for instance), and in part by a weakening dollar that has made certain American goods cheaper compared to their equivalents elsewhere. However, an increasingly global anti-American sentiment, coupled with rising oil prices and a system in which most of the component pieces for "American made" products come from China, has meant that this bounce is nowhere near as strong as anyone predicted it would be. Factory utilization is still well below the 100% mark, meaning that we still have plenty of extant capacity before the need to start building new factories or hiring new workers kicks in, with automation continuing to reduce the number of those new workers.

The US IT sector is struggling, though I think it's on the upside of its own cyclical slump. In a number of areas (as I saw last week) American software is seen as shoddy, too expensive, and often coming with very restrictive strings attached. I was also a little dismayed to realize that at an international conference on a major XML standard, the American contingent was practically non-existent. In wireless technology, this country is seen as being about three years behind the curve, and it is slipping behind other countries in terms of R&D investment and technical education. The migration out of IT, while not as pronounced as it was, is causing senior-level talent to retire even as fewer new programmers enter the field. Meanwhile, wages in places like China and India are increasing for those same people, who are increasingly finding it more attractive to stay home and set up their own companies than to take the big leap to the US (especially in light of increasingly hostile immigration policies).

So what does this all mean long term? The cost of foreign IT professionals is rising even as the population of local IT professionals is dropping pretty dramatically. Ditto other knowledge workers and creative types. The incentives to go back into the work force in a salaried position are becoming minimal at this point: subsistence wages, lack of health care, insane hours, dehumanizing work conditions, and the very real risk that money you put into pension funds will find its way into some retiring CEO's golden parachute instead. The next-generation work force is making money buying and selling on eBay, starting Open Source projects that they can then charge consulting fees for, jumping from contract position to contract position on Monster, pooling money to buy into houses, becoming virtual assistants, becoming small-press publishers, or selling prints through online services such as DeviantArt and 3D meshes through Renderosity. What they are not doing, at least not in this country, is becoming any more reliant upon the corporate infrastructure than they have to.

I'm concentrating on these twenty- and thirty-year-olds because they provide a snapshot of our society twenty years from now. Most of them are neither liberal nor conservative - they are socialist (Green) or libertarian, both with an extraordinary distrust of the established order, but holding differences in terms of how goods and services are distributed. They do not get their news from Ted Koppel or the New York Times, but from a couple of dozen primary and hundreds of secondary sources on the Internet. Many express themselves in blog space, and they consequently hold a great deal of contempt for "established" facts, instead building an internal network of the universe where everything is weighed for its likely veracity or utility. They generally have a very loose sense of what constitutes ownership, at least in the virtual world. They are the future, and if I was a CEO, I would be very worried about them, because they will tear corporations apart.

Corporations buy and sell things. They rely upon homogenization of the market to reduce the degree of specialization they need to undertake to sell a product. They rely upon controlling the flow of information to enforce that homogenization (one of the reasons why broadcast media in this country is in such a deplorable state), and they will attempt to censor anything that casts their message in a negative light. So long as their market is also made up of their employees, and they can effectively control those employees by reducing the scope of available alternatives in employment, they can effectively keep the existing system going. However, demographics is going to start working against them, as is technology, as are energy and health care costs.

When a job becomes untenable or ceases being "cost-effective", people will start looking for alternatives. The ones least likely to do so are the ones most vested in the existing order (one of the reasons why I do not think the current "accidental" boom in real estate was all that accidental). The ones most likely to do so are, ironically, the ones these companies most want to keep - the ones who will generate some form of licensable IP. Done on the company dime, these things belong to the company. Done on the programmer's (or author's or writer's) dime, and not thoughtlessly given away, they become an asset that the creator can potentially license themselves. Many people in the 1990s didn't understand this, and handed control over multi-million dollar ideas to companies in exchange for a mediocre paycheck and all too often worthless options. The next generation has learned from its forebears, and is in general far more educated about IP law.

The cost of "doing business" is going up, with too much of that cost borne by the workers in those businesses and too little borne by the "investors". Society adjusts. We're on the edge of an IT shortage, one induced in great part by the heavy-handed tactics of corporate culture in the last few years. Virtualization, high health care costs, rising energy costs due to global demand increases from other "Post-Industrializing" countries and increasingly limited supplies, and rapacious and demeaning business practices have engendered a generation that is preparing for the future by becoming locally self-sufficient, autonomous, connected, and consequently much less dependent upon the corporate (or governmental) infrastructure, or the products and services they provide. They have ridden out their first big Depression, in a way that much of the rest of the country really hasn't yet, and in general are taking those lessons to heart and building on them (and reforming the structures that generated the problems in the first place). Yep, I'd be very worried indeed, if I was a CEO of a large corporation right now.

Okay, enough polemics. I'll get back to code in my next posting.

-- Kurt Cagle

September 10, 2004

Post-Convention Thoughts

The conference is over. The only major thing that I missed was Chris Lilley's Q&A session, Chris of course being the chair of the W3C Graphics group. I was privileged, though, to catch the SVG W3C Working Group this morning as they answered a number of questions from the audience about the state of the W3C specification. More about those questions in a moment.

I did want to make an observation that occurred to me in light of our stay in Japan. The people of Japan are remarkably homogeneous, a cliche so obvious that I began to wonder some about it. Last night I was in the Ginza district, watching as the Sararimen headed away from their respective places of work, when it hit me. If you watched which building they emerged from, a pattern began to emerge. There were subtle variations in the way that the men dressed, and more obvious variations among the women, but in all cases the variations were the same for each building's inhabitants. This echoed the more obvious uniforms that people in service professions, from banking to food service to the airlines, seemed to wear.

Japan was, of course, famous for its feudal society, a society that theoretically changed to a more democratic form in the 1850s. However, what I noticed was that in many ways Japan has become one of the world's most striking examples of post-modern feudalism. The Shogun of medieval Japan now wears Armani suits, and his castle, instead of bearing distinctive colors, bears logos such as Hitachi, Toshiba, Fujitsu, NEC, Sony, and so forth, and towers over the landscape as a skyscraper. Higher-level functionaries within each Shogun's court wear suits of a certain cut and style as well, typically of the same type as the CEO's but never as expensive or elegantly tailored. The mid-level female management (women never get into the highest levels of management, from what I've seen) typically wear skirts of a certain length and color, and the bandeau seems to be pretty much pervasive, serving as much of an identifier as any court colors. The lower the social position, the more obvious it is that the mode of dress is a uniform. Significantly, once you move away from the work world, the homogeneity in dress goes away, though fashion still holds its own sway.

Lest some think I'm Japan-bashing, I see this phenomenon occurring in many other places in the world (including, distressingly, the US) -- it's more pronounced and obvious in Japan, but the concept of corporation as feudal overlord does rather explain entirely too many problems in the world.

So what does this have to do with the metaphorical web? Watching the process and interactions of the Working Group members this morning, I thought about that observation, and about the fact that the "voting members" of the working group are all affiliated with some company (I don't believe that Invited Experts can vote, but I may be wrong about this). At one point during the cruise a couple of nights ago, Chris Lilley lamented to me that if he could have any new element at all in SVG, it would be NURBS. While I personally agree - NURBS are remarkably effective at building complex shapes without the requirement of working out complex arcs and segments - this raised the question in my mind of why Lilley, who heads up this organization, couldn't get such an obviously useful shape through the group.

The formal members of any W3C working group represent companies. Some, such as Jon Ferraiolo (?) of Adobe, have an incredible degree of latitude in being able to make decisions in the name of Adobe, while others have almost no real say whatsoever - they are there solely as a voice to object to something which may have a negative impact on a given company. I think that this tends to be one of the reasons why it takes so long for any change to a developing working draft to gain approval - the ramifications of such changes have to be passed up into the corporations, with the attendant issues of politics that this always brings. Of course, it is unlikely that most changes in the specifications require going all the way to the top (though I could see certain individual CEOs, such as Bill Gates, being very interested in such details), but the changes nonetheless reflect the reality of our global post-modern feudalism.

Okay, too much of a diversion. Here is what we did glean from the often cryptic mutterings of the diplomatic emissaries of the standards world:
  1. XPath is making its way into the SVG DOM, modelled after the DOM 3 XPath specifications. I'm doing a happy dance about this one. An XPath implementation ends up eliminating a lot of unnecessary tree-walking code, can provide searches based upon ambiguous intermediate nodes in a transformation, and can do a certain level of text processing (a rough sketch of what this might look like follows this list). I'd be happier if they were mandating XPath 2, but that will come in time.
  2. Low-level sockets are also being built into the SVG DOM, along with a slight variation of the XMLHTTPRequest object. Dean Jackson, who graciously talked with me afterwards about many of these issues, indicated that while he knew there would likely be some pushback ("This doesn't belong in a Graphics Standard" is something that he's heard many times), the ability to more precisely control the bindings for content within a distributed environment more than outweighs the categorical objections.
  3. Arcs are still being debated. This has been one of the most commonly requested features of SVG practically from its inception, and frankly even the working group acknowledged that the current path arc specification is too complex and difficult to work with properly. There was a tendency to want to place arcs into the "you can create arc support with sXBL" category, but I think this argument needs to be treated VERY carefully. You can create circles, rectangles, ellipses, and so forth with path commands as well, but their obvious utility makes them prime candidates for being privileged elements. I've wondered more than once whether the better solution, rather than creating a privileged "arc" element, would be to add startSweep and endSweep parameters to the circle and ellipse elements (and perhaps to all privileged shape elements other than <path>). This would essentially draw a path (and associated radial fill) from the startSweep angle to the endSweep angle; the behavior is similar to what the arc commands in <path> already provide, but it is far more intuitive (a sketch of the angle math this would replace follows this list).
  4. Much of the effort within the SVG group is now centered on sXBL, which many see as the infrastructure layer necessary to provide the critical mapping between low-level graphic primitives and high-level graphical components. I've talked at length previously about the debate between the CSS and XPath camps, and offered at least one solution to the dilemma. Within Microsoft's XML implementation there is an intriguing concept of being able to assign, when an XML document is created, the inherent selection language used. This is designed in part to provide backward-compatibility support for XSL Patterns, their XPath precursor, but the idea has some potential here as well. A selectionLanguage attribute on the root <svg> element could serve as an indicator of the current selection language (defaulting to XPath 1.0); this attribute could additionally support a CSS selector system if the underlying environment supports it, or XPath 2 when that language becomes supported. I'll be pushing this suggestion back into the SVG circles (the MSXML precedent is sketched after this list).
  5. Communication protocols between the W3C and the rest of us were discussed. There is, on the W3C's part, a sense that there may be something of a disconnect with the rest of the community (a point on which I agree), and ideas were discussed on how to improve communication, including having the group respond, at least in part, to queries coming in from the community. My own take on this (and something that I think should be discussed for ALL working groups) is the idea of appointing a Public Relations Officer, someone who would act as a point of contact with the W3C, would ensure that there was some communication going out the other way, and would be able to act as an aggregator for the various websites in conjunction with the web editor. I'd appreciate hearing other ideas from you as well on this - how would you go about improving communication with the W3C, on both ends of the process?
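
For the first item, here is roughly what XPath against an SVG document might look like, modelled on Mozilla's existing DOM 3 XPath binding; the final SVG DOM method names may well differ, so treat this as a sketch rather than gospel.

    // Find every red circle in the document and give it a black stroke.
    function svgResolver(prefix) {
      return prefix == "svg" ? "http://www.w3.org/2000/svg" : null;
    }

    var result = document.evaluate("//svg:circle[@fill='red']",
        document, svgResolver,
        XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);

    for (var i = 0; i < result.snapshotLength; i++) {
      result.snapshotItem(i).setAttribute("stroke", "black");
    }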
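
For the third item, here is a sketch of the angle math that a startSweep/endSweep pair (my own hypothetical attribute names, not anything in the specification) would spare authors from writing by hand today:

    // Convert a start/end sweep (in degrees) into the current path arc syntax.
    function arcPath(cx, cy, r, startSweep, endSweep) {
      var toRad = Math.PI / 180;
      var x1 = cx + r * Math.cos(startSweep * toRad);
      var y1 = cy + r * Math.sin(startSweep * toRad);
      var x2 = cx + r * Math.cos(endSweep * toRad);
      var y2 = cy + r * Math.sin(endSweep * toRad);
      var largeArc = (endSweep - startSweep) > 180 ? 1 : 0;   // which half of the circle?
      return "M " + x1 + " " + y1 +
             " A " + r + " " + r + " 0 " + largeArc + " 1 " + x2 + " " + y2;
    }

    // e.g. someArc.setAttribute("d", arcPath(100, 100, 50, 0, 135));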
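
And for the fourth item, the MSXML precedent I mentioned looks roughly like this in JScript (the file name is illustrative); the point is simply that the selection dialect travels with the document rather than with each query:

    // MSXML lets a document carry its selection dialect as a property;
    // here we ask for W3C XPath rather than the older XSL Patterns syntax.
    var doc = new ActiveXObject("Msxml2.DOMDocument");
    doc.async = false;
    doc.load("figure.svg");
    doc.setProperty("SelectionLanguage", "XPath");
    var reds = doc.selectNodes("//*[@fill='red']");
    alert(reds.length + " red elements found");
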
Okay, I'm back stateside, waiting for the flight from San Francisco to Seattle, and from there to take the kids off the hands of my long-suffering wife. Until later, domo arigato gozaimashita for reading my blog.

-- Kurt Cagle