March 1, 2014

Meta-trends and Mind Palaces

Society has become future oriented. Half a century ago, the future belonged primarily to a handful of technically oriented writers and thinkers - Asimov, Clarke, Heinlein, Philip K. Dick - and was a staple of world's fairs. While there have always been people who cast the net wide to see what would happen a decade from now, a lifetime from now, a millennium from now, the future generally seemed both incredibly fantastic and, curiously enough, a lot like the present, except with air cars.

The 1970s and 80s saw the rise of the analyst and futurist professions - professional prognosticators who advised business, government, the military, investors and researchers. Like most professions, the field started as more of an art than a science, and generally involved lots of research: finding patterns and trends, figuring out whether those trends were simply transients or had staying power, then using them to get a better feel for which way we were going.

By the 1990s, we were living the future. The Internet was new and bright and shiny, and suddenly whole industries were being transformed (for better or worse) by this vast wave of interconnectivity and information sharing. This is a pattern that is still playing out, rapidly moving us to a web of things where everything, from phones to cars to glasses to toasters, possesses rudimentary intelligence and in some cases even rudimentary awareness. This massive trend (one of Toffler's megatrends) is in turn fueling similar transformations in energy and transportation infrastructures, health care and bioscience, materials engineering, finance, education and the like.

Yet it's important to understand that change itself is transient. In the wake of such change there is quite frequently a period of consolidation, the recession on the back end of the boom, as each disrupted system seeks a new equilibrium. Sectors that had seemed to be brimming with innovation become moribund and entrenched. Social values become more reactionary and conservative, and in many cases become radicalized. This doesn't last. In the wake of one of the most profound megatrends of the last eighty years (the unholy admixture of high tech and high finance), much of the "advanced world" shifted deeply conservative, but there are signs that this period is ending, from the ongoing series of social revolutions taking place along the Fertile Crescent to the current struggle for control of the Republican Party in the US. This isn't necessarily a shift in predilection for the moderate status quo parties; instead, it may be an indication that new political philosophies are emerging that are more reflective of the changes in reality, and that these are indeed proving attractive again.

I bring this up to make a distinction. A megatrend can be thought of as a tsunami - it's typically enabled by the earthquakes of technology, and is often tied to one or two domains of innovation. The pattern and characteristics, the drivers of that wave's amplitude and velocity, the chaotic aftermath and rebalancing of the society wracked by this megatrend ... these are all meta-trends, characteristics of trends that define their nature and, to a great extent, also determine their impact.

I work as an ontologist. This word is not familiar to most people, and indeed, if you asked on the street what an ontologist does, you'd probably be told that they are doctors who specialize in diseases of the naughty bits. There is, unfortunately, more truth there than I'd be comfortable admitting if I were being completely candid.

An ontologist is someone who builds models. These models generally neither come in kits nor require the use of special adhesives that make you lightheaded with overuse, however. Instead, the models that an ontologist builds are conceptual - what types of things exist within a particular problem domain, and how these things relate to other types of things within that domain.

Perhaps a good example of what an ontology is can be taken from the BBC television show Sherlock. Benedict Cumberbatch's titular character has the obligatory Sherlock scan, seemingly able to look at a strand of hair, a bit of dirt, and a smudge of grease, then from these deduce that a man is a military doctor previously deployed in Afghanistan and a crack sharpshooter. This of course is pure Conan Doyle. Where the current version differs is that Doyle saw these as deductions - if A then B, if B then C - whereas Cumberbatch's Sherlock employs something rather different. He creates in his mind a model, a set of scenarios or conjectures, each of which in turn builds upon a foundation called a Mind Palace. The Mind Palace is the collection of known information and the relationships that exist between those pieces of information, and the scenarios are then assertions made against this Mind Palace to test them.

In other words, Sherlock is an analyst working with a Mind Palace (an ontology) and conjectures (hypothetical assertions or models) in order to see if the latter are consistent with the former. Such analysis is never perfect, because no model can perfectly capture all the information about a particular thing, and a good analyst generally understands that you cannot completely eliminate a candidate scenario from discussion. (There's a very deep link between ontologies and the Heisenberg uncertainty principle that makes me believe a mathematical formalism for ontological analysis probably looks a great deal like quantum theory.) What such analysis can do is make it possible to rank scenarios according to potential likelihoods.
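The Mind Palace idea maps fairly directly onto how an ontologist stores and tests knowledge. As a minimal sketch (my own illustration, not any formal ontology language - the facts, names, and the naive single-valued-relation rule are all assumptions for demonstration), known facts can be held as subject-relation-object triples, with a conjecture being a set of assertions checked against them:

```python
# The "Mind Palace": known facts as (subject, relation, object) triples.
# The specific facts here are hypothetical, drawn loosely from the show.
mind_palace = {
    ("John Watson", "is_a", "doctor"),
    ("John Watson", "served_in", "Afghanistan"),
    ("John Watson", "bearing", "military"),
}

def consistent(conjecture, facts):
    """Naive check: a conjecture fails if any of its assertions shares a
    subject and relation with a known fact but asserts a different object.
    (This treats every relation as single-valued - a simplifying assumption.)"""
    for s, r, o in conjecture:
        for fs, fr, fo in facts:
            if s == fs and r == fr and o != fo:
                return False
    return True

# Two candidate scenarios tested against the palace:
s1 = {("John Watson", "served_in", "Afghanistan"),
      ("John Watson", "is_a", "doctor")}
s2 = {("John Watson", "served_in", "Belgium")}

print(consistent(s1, mind_palace))  # True - nothing contradicts it
print(consistent(s2, mind_palace))  # False - conflicts with a known fact
```

A real ontology system (RDF, OWL and the like) handles multi-valued relations, subclass inference and likelihood ranking, all of which this toy check deliberately omits - the point is only the shape of the idea: a store of facts, and conjectures tested against it.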

It's worth noting that even in legal theory, you can never absolutely determine whether someone performed a critical act. At best, you can get a confession and multiple eyewitnesses, but neither of these is foolproof: the person confessing may in fact be protecting someone else, be deluded, or even be confessing to one crime in order to avoid being convicted for another, and eyewitnesses almost invariably color their memory with their own belief systems and their desires to help or hinder, and can even invent what didn't happen. Instead, the human element exists primarily to establish a social judgment of the validity of a given model or scenario, and usually also to determine the degree to which, if a conviction is achieved, the perpetrator of that action shall be punished.

I think these same kinds of formalisms can be applied to future analysis. Analysis is not a crystal ball - it will not tell the future. What it will do is provide a set of scenarios that explore potential futures, and with this establish an estimate of the likelihood that each scenario might occur, as well as using these building blocks to iterate the process (if this scenario is seen as true, then what scenarios follow from it). The ontology - the Mind Palace - for this then consists of the models that contain the most relevant concepts, along with their respective relationships. These, then, are the meta-trends of Predictive Analytics.

I've created a new group on LinkedIn called Future Proof to explore meta-trends and the ontology of predictive analytics, though I will also continue discussing this and related ideas here on Metaphorical Web.

January 21, 2013

On Generations and Generalities

In the comment stream from recent posts, I've had a couple of people take me to task for making sweeping generalities, and for trying to create broad-brush generations that act to unify people's actions. I'd like to focus on both of these topics here.

In statistics, you have three core concepts that together describe what I'd call the "middle values". One of these is the average - you sum a given property across a number of samples from a set, then divide by the number of samples. Averages can actually describe the behavior of a population reasonably well when that population is large enough - relatively small outliers tend to get smoothed out. However, for smaller populations (under 800 or so people, it turns out), significant outliers can skew the data significantly. This is why, in general, when talking about the average (or mean) value, you usually want to give two values - the mean μ and the standard deviation σ. For Gaussian distribution curves, this is a pretty good measure of how widely dispersed the data is around that mean value, and hence of the likelihood that the mean provides a representative value for the property being measured.

The median value is the value such that, for a population, there are as many (give or take one) values above it as below it. The value of the median is that it provides a measure of whether the data set is skewed in one direction or another, and hence is not symmetrical. The closer the median is to the mean, the more likely that the distribution is bell shaped, whereas the farther it is, the greater the likelihood that the distribution either has several significant outliers or that there may in fact be more than one distribution peak involved (which in turn usually indicates that the variables involved are hiding two or more distinct properties that each factor into a standard Gaussian around different means).
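The interplay of these measures can be made concrete with a small sketch using Python's standard statistics module; the sample numbers are purely illustrative, chosen to include one large outlier:

```python
import statistics

# Hypothetical sample of some measured property, with one large outlier.
sample = [12, 13, 13, 14, 14, 14, 15, 15, 16, 42]

mean = statistics.mean(sample)      # pulled upward by the outlier
median = statistics.median(sample)  # resistant to the outlier
stdev = statistics.stdev(sample)    # large relative to the mean: a warning sign

print(mean, median, stdev)  # 16.8, 14.0, ~8.93
```

The gap between the mean (16.8) and the median (14.0), together with the large σ, is exactly the signal described above that the sample is skewed and the mean alone is not a representative value.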

With this elementary bit of statistics out of the way, I want to focus on two other terms - generalities and stereotypes. A generality is an assumption that for any given property or characteristic, by knowing the value(s) of a subset of a group one can generalize this up to the group as a whole. Most surveys utilize this principle, and so long as you have a Gaussian distribution (not necessarily a given) and a sufficiently large sample (800+ or so), you can ascertain the degree of confidence in making such generalities.

Stereotypes, on the other hand, involve applying generalizations to individuals within a given sample. Having determined a generalization, as you apply it to smaller and smaller groups, you also have to take into account that the probability that a given person within that sample has a given characteristic drops according to a clearly defined relationship with the standard deviation. The lower the standard deviation, the higher the chance an individual in that group has that property; the higher the standard deviation, the lower that chance.
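That relationship can be illustrated with a normal-distribution sketch (the mean of 100 and the cutoff of 110 are made-up numbers, chosen only to show the effect of σ):

```python
from statistics import NormalDist

# Probability that a randomly picked individual sits more than 10 points
# above the group mean, for a small sigma and a large one.
probs = {}
for sigma in (5, 15):
    probs[sigma] = 1 - NormalDist(mu=100, sigma=sigma).cdf(110)
    print(f"sigma={sigma}: P(x > 110) = {probs[sigma]:.3f}")
```

With σ=5 only about 2% of individuals fall above 110, while with σ=15 roughly 25% do - the tighter the distribution, the safer it is to attribute the group's typical value to any one member, which is the statistical core of why some generalizations hold up and stereotypes so often don't.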

When talking of generations, what you are in fact usually describing is a cohort that has a number of mutually common characteristics above the mean expected value for all individuals. These characteristics usually come about because of shared common experiences, and are consequently usually driven by demographics. As an example of this, consider the US space program and its effects on pedagogy. In 1957, Sputnik was launched. The US space program really got going in about 1961, but from the standpoint of public awareness, John Glenn's historic flight in the Mercury program in 1962 was in many respects the start. From 1962 to 1974, then, the space program became a seminal part of public education, with the high point being the landing on the Moon in 1969. From about 1971 onward, however, the impact of the moon missions faded as other factors - the outcry over the Vietnam War, the scandals of Watergate, and just general fatigue - made them less and less compelling.

From the standpoint of grade school and high school kids, this had a huge impact, probably more than it did for any other group, in great part because school curricula were built around those programs. This group (people born from about 1955 to 1972 or so) had a higher proportion per capita of engineers, scientists and researchers graduate from college than at any time before or since. By 1977, when the youngest of this group was entering grade school, this big STEM push was fading, in part because a conservative agenda was reasserting itself in the school system and there were fewer "gee-whiz" engineering innovations happening to drive the push.

Significantly, 1962 marked another turning point. The number of babies born had risen steadily from about 1943 onward as economic conditions improved and soldiers returned from World War II with significant disposable income saved up. This peaked in 1955, then started to drop as the war generation reached forty and menopause began to set in. It dropped more precipitously still with the advent of the first birth control pill in 1960, to the extent that by 1973 there were only 75% as many children born as at the peak.

Most demographers tend to measure populations from the zero-points between peaks and troughs (or where the population Gaussian reaches a certain number of standard deviations from the peak). However, sociologically, a better measure of a population cohort may be the population at either a peak or a trough, plus five years. In general, those born before a peak will have more of society's resources available to them - attention, money, policy, etc. - while those born after the peak will see those aspects diminish. The five years has to do with the fact that until kindergarten, the impact of the larger world on a child is fairly minimal - babies and toddlers will behave the same regardless of when they are born, and the way that we treat them will tend to be the same as well.

However, by age five, children are entering universal education, which means that for the next ten to sixteen years there is a homogenization effect - all kids will tend to experience the same culture present at any given time, and will correspondingly be shaped by that culture. They will watch the same shows, hear the same issues being discussed (albeit from potentially different viewpoints) at the dinner table, wear roughly the same fashions, and be affected by the same educational indoctrination. This has a huge impact upon the future development of people, because during their formative years they have a common cultural reservoir from which to draw.

Now, obviously, no one is going to grow up to be a clone of everyone else. Gender, regional differences, growing up in urban vs. rural vs. suburban households, ethnicity, family history and personal temperament will all play a significant role as well. However, if you take the population as an aggregate at any point in time and look at the population of a given age at that time, generational deviance from the norm for specific characteristics will be well above the long term aggregate for that age, and those deviations are more pronounced for distinct cohorts of ages.

Births per capita troughed in 1973 and peaked in 1990, but bucked the pattern of previous generations by troughing in 2002 at very nearly the peak value and then rising again until 2008, at which point the birth population began to drop precipitously through to this year. This indicates that birth rates are not purely cyclical, but are affected by societal changes (long-term declines in earning power meant people were marrying later, and consequently having fewer kids and skewing the curves from previous generations) and by economic ones (the birth rate was trending upward in 2007, then dropped sharply by 2008 as the global economy sputtered). With the advent of oral contraceptives fifty years ago, the ability of women to choose when they get pregnant is significantly enhanced, and this no doubt will continue to alter what had been fairly distinct generational patterns before (births per capita were nowhere near as sensitive to economic conditions before the Pill).

My goal with these essays is not to create stereotypes, but rather to determine, from a sea of data, whether there are consistent patterns that emerge about the evolution of society. I see the Millennials - those born between 1980 and 2000 - as in general being very connected and artistically inclined, because this was the first cohort to have digital media and the tools to use it from the time they were in kindergarten. This will make them very different from those born prior to 1980, who largely wrote the tools. However, this does not mean that a person picked at random from this group will be permanently connected to their iPads and music players - only that statistically it is more likely that they will. In effect, this is an attempt to identify and model the various cohorts, in order to better understand how society will change as they enter different phases of their lives as a group.

January 7, 2013

Semantic Modeling and Related Topics

A short post - when I'm not throwing firebombs at big social institutions, I do a lot of work in data modeling, XML and semantics. If you have come here looking for more technically oriented content, please check out my new site at Semantic Modeling.

January 6, 2013

The Mercantilist and the Engineer

In 1959, author and journalist Vance Packard wrote about the class structures inherent in the US in The Status Seekers. While his work appears fairly dated today, his basic premise - that America has always been a stratified society with distinct "classes" even as it espoused egalitarianism - still holds. I first encountered his work in high school in the mid-1970s, but even then, while I thought there was a great deal of interest in the work, I also thought he missed something critical. After thirty-five years, I have a pretty good idea what it was.

Packard broke down society into nine classes in a pyramid, ranging from the lower lower class (the destitute) to the upper upper class (the ultra-wealthy). In his time the upper lower class consisted of the trades or blue collar workers, with the lower middle class being the lowest level of managers and small independent service oriented business owners, the middle middle class being the layer of middle management that was all pervasive in the years after World War II, and the upper middle class being the professionals - lawyers, doctors, accountants and so forth. The lower upper class in turn consisted of the nouveau riche, while the middle upper class was the old money rich, and the upper upper class were in effect the thin strata of ultra-wealthy cloud-dwellers who dominated the world's financial system.

So far, so good, though the description of new wealth vs. old wealth, I think, hid a deeper truth. Moreover, Packard brushed over a few anomalous classes: the military (which has always had a two tiered class structure independent of the rest of American culture), the academy (which had a similarly distinct system of students, non-tenured professors, tenured professors, and department heads and deans), and one final group that he really had trouble with - techno-nerds, who even then didn't seem to fit into the broader picture.

The reason this last group didn't fit neatly into the equation was that the techno-nerds of the time were simply a manifestation of the engineering class, which has never fit neatly into the hierarchy. Most are well educated, but not academics, often having an ambivalent social standing somewhere between the middle managers and the professionals, but in general not belonging to either. In many respects, this has always been true. The engineering class has, over the years, bounced around. In wartime, it's not at all uncommon to find it residing with the military, which taps its expertise, even though most engineers find war mystifying ... they see too much potential in human beings to feel that taking someone else's life is justified simply because they are not us, and in many cases, those engineers were as likely as not corresponding with their counterparts on the other side up until the day that hostilities were declared (and often even beyond that).

In peacetime (and despite waging two global wars until recently, most of the United States is still on a peacetime footing) they tend to get tapped by the upper middle class (which I think is actually part and parcel of the lower upper class) in order to gain ascendancy into the upper class, knocking those already there down a rung or two. In effect what you have at play is a perennial struggle between the emergent upper class - the New Mercantilists - and the existing upper class - the Old Mercantilists.

In today's terms, mercantilists are investors, financiers, senior (non-technical) managers, account executives, marketing and advertising professionals and others involved in the buying, marketing and selling of goods and services. Engineers, on the other hand, are technical designers and implementers - programmers, architects (both structural and software), scientific researchers, mathematicians, information managers and librarians, industrial and product engineers, as well as most domain analysts.

It's worth noting that this process is recursive. The Old Mercantilists were a previous generation's New Mercantilists who took advantage of the technology (and the technologists) of their time to knock over the then masters of the universe. However, in the process, the old mercantilists also tied themselves to a particular technology, and so long as that technology was not made obsolete, they generally continued to build up their power base. Eventually, perhaps over generations or even dynasties, the balance of power shifted as the innovations of the technologists permeated through society and rendered the technological basis of the old guard obsolete.

Engineers and mercantilists have long had an uneasy relationship. In general, most mercantilists of one era are the beneficiaries of the achievements of engineers of the previous era. Engineers are problem solvers, and given the opportunity to find the best solution, they all too frequently do not take the time to step back from the project and understand its full business ramifications until some mercantilist, who seldom has the engineer's focus, realizes that it will in fact meet a need he has for making more money.

Having done so, the mercantilist all too often realizes that should the engineer go elsewhere, so too does that exclusivity of knowledge, and so the mercantilist will generally do everything in his power to make sure the engineer stays under his control. In the past, this included killing the engineer if necessary.

Needless to say, engineers have become a little distrustful of mercantilists as a consequence.

It should be noted that politicians and senior managers generally arise from the middle upper class preferentially (as do the most influential military officers and (non-scientific) academics, even today), which usually tends to strongly color their views about social and financial morality. It's noteworthy that the current Congress is still dominated by millionaires, partially because politics is an expensive occupation, but partially because the background of those who run for Congress heavily slants towards those who are second and third generation wealthy; relatively few people who have made their wealth in the most recent technological revolution are now involved in policy setting, simply because the ones who made the wealth generally are too old while their children are not old enough to play in that arena. There are exceptions, such as Maria Cantwell of Washington State, but I suspect that we're really only going to see the scions of the New Wealthy get into politics in any numbers in the next thirty years or so as the GenXers move into the policy arena.

One problem that the engineer faces is that most mercantilists, young or old, are afraid of the engineer. Engineers are problem solvers. Mercantilists are opportunists - they seek problems to exploit, in order to make a profit. So long as the problem exists, they profit by mitigating the effects of it, but if the problem was solved, they would have no market. As such, there is often a tension when mercantilists work with engineers, because the engineer's natural impulse is to solve a problem in as thorough a manner as possible, and the idea of deliberately leaving a problem open (or even deliberately creating them, as mercantilists have been known to do) runs counter to the engineering mindset.

Moreover, engineers tend to be egalitarian, particularly with other engineers. The open source movement is a prime example of an engineering solution, and even now, mercantilists are struggling with how to keep it under control without ruining their business models. The transparency in government movement is an engineering solution to corruption in government, but politicians prefer opacity, because politics is generally about doing a favor for someone in exchange for a favor for you at a crucial time, and transparency radically undermines that. It also makes it far more difficult for people to follow the long, time-honored tradition of going from politics to corporate advocacy to academia and back to politics.

Not surprisingly, this often means that engineers and mercantilists speak different languages, because many of their operating assumptions are very different. Engineers are noted for the precision of their language - terms have very clear meanings, and when a term is ambiguous, the natural tendency of an engineer is to formally specify a definition to disambiguate it. This precision of language is important, because it enables high throughput communication. It also has the side effect that engineers dislike lying and deliberate vagueness, because these stand in the way of communication. What's more, an engineer is more likely than others to check up on assumptions received from others when he or she is uncertain about their source or veracity, and deliberate falsehood will usually reduce the authority or weight of information from that source.

Engineering communication also involves both a bandwidth check and a dominance check. When an experienced engineer communicates with someone else, he is likely to start out with probing questions to determine the level of competence of the other person, and then will adjust up or down as appropriate. Competence is a big part of the engineer's stock in trade, so the authority of another person goes down in his mind when the engineer has to throttle his conceptual flow, while if the conversation reveals that the person being addressed has a higher degree of competence, that person's authority rises accordingly. Thus, when a new engineer is brought into a group of engineers, one of the first things that happens is a dominance game, in which the new engineer attempts to establish his or her place in the social hierarchy via competency. This often has only slight correlation with standing in the corporate hierarchy, for instance - the most competent engineer becomes the guru, and is accorded both the highest degree of respect and, to a certain extent, the ability to veto a course of action, even if he is not in a position to do so socially.

Mercantilists, on the other hand, use corporate social standing (typically tied to wealth or influence) both to establish dominance and to communicate. Language is typically vague and multilayered. A mercantilist is constantly playing poker - attempting either to convince others to buy what they are selling against obvious resistance, or attempting to buy what others are selling for the least amount of outlay. This means that in terms of communication, the mercantilist seeks to be deliberately vague, in order to provide the least amount of information to either their transactional partner or potential competitors for the same resources. This extends beyond simple monetary transactions to personal transactions - information, like everything else, can be traded for gain or loss. The mercantilist is precise only in contracts, and then only to ensure that there is nothing within a transaction that can leave them obligated beyond very set terms. Theirs is the language of persuasion, and their metric of success is the degree to which their persuasion has enriched them.

Given these differences, it's perhaps not surprising that there is as much animosity between the two groups as there is. When an engineer is asked to estimate the time it takes to do a task, he or she will treat it as a problem to be solved, and will usually be able to tell you fairly accurately a range of time that such a project will take, given uncertainties for certain tasks. A mercantilist, on the other hand, will hear a time and implicitly assume that it is a commitment, and will usually attempt to minimize that time as much as possible, because he is paying by the hour. At the same time, the mercantilist will go out of his way never to be put into a position where he is responsible for a given time commitment. Engineers are inclined to share information, because it increases their overall authority. Mercantilists are inclined to hoard information, because it decreases their vulnerability and protects their advantages in the marketplace.

Even their social structures are different. Mercantilists gravitate towards hierarchies, because their social position is predicated upon their measurable influence, which can typically be seen by the number of people who work "under" them. In essence, their authority derives from the number of people who report to them, coupled with their success as sales people (either directly as field sales agents or indirectly through the number and effectiveness of the field agents that report to them). Engineers, on the other hand, gravitate towards distributed nodal networks, where you have small clusters or nodes of engineers that work with one another within the context of a larger sea of communication. In this context, an engineer's social standing is based upon their authority - the people they have studied under, the number of works they have authored, the number of papers they have presented, the number of patents they have submitted.

Put another way, for the mercantilist, authority derives from social position, while for the engineer, social position derives from authority. In many respects, this is one reason why engineers and creatives usually find common cause. A creative, whether an author, an actor, an artist, a musician or an athlete, is known primarily for his or her works. It is the strength of those works that establishes their reputation. They can become quite wealthy on the basis of that work, of course, but it is not in general their wealth that determines their social position. Indeed, like engineers, few creatives are ever really welcome in even nouveau riche circles, and the ones that are, in general, are there because they have parlayed their wealth into investments and corporate control (and even then they are suspect).

On a final note - the twentieth century was defined either by the military (a strong command and control society that is very hierarchical) or by the mercantilist (a hierarchical society that tends to venerate those most successful at persuasion and making money). There are indications that the twenty-first century will see the twilight of the mercantilist and the rise of the engineer (a move towards decentralized networks and authority as a measure of social status), followed by the rise of a creative class (the Millennials) where authority derives from reputation. This argues that Packard's basic premise was flawed. It may be more appropriate to think of society as cycling through different structures, with power and influence waxing and waning across each sector over time.

My thanks to Hugh Chatfield for the inspiration for this one. Please see his post on Emily Carr, CNN, Carl Sagan and Bucky Fuller here.

January 2, 2013

The Paradox of the Wage Slave

Once upon a time, there was no such thing as the hourly wage. If you were an independent farmer, you'd sell your grains, cows, pigs and vegetables at the market, and in general would try to stagger these sales so that you could have money coming in most times of the year. If you were a tenant farmer, when the landlord sold the goods you produced, you'd get a percentage of the sales based upon the amount of land you farmed. A smith would negotiate by the piece or the lot, and usually took a down-payment to cover the cost of materials. Farmhands and soldiers would be paid a set amount each week, usually at the end of the week after the work in question was done, but might also get a certain proportion of their wages from a share of the harvest or a chance at the spoils. Sailors would get a share of the shipping proceeds (or plunder, if the ship in question was a pirate or privateer vessel), plus a stipend for completed voyages and occasionally a small signing bonus.

In general, the per week payments were intended to keep the laborer involved until the final payout - in effect the laborer was part of the venture and would share in the rewards, or was paid per piece with just enough to cover the artisan's or tradesman's costs and basic sustenance paid in advance.

Industrialization changed that, along with the arrival of the mechanical clock. People have always had the ability to tell approximate time via candles or hourglasses, but because such resources were both expensive and required maintenance (and were at best very approximate), most timekeeping was managed by church bells sounding the times of worship. With the advent of the clock, however, it became possible to measure tasks more precisely, and as a consequence to break up time into discrete units during the day.

The machine paradigm also broke the normal agricultural rhythms of working at dawn, getting a big breakfast, working until the sun reached its peak, taking a short siesta, then working until near dark. Instead, you worked to the clock. In the factory paradigm, it made less sense to pay the workers a small initial payment and then pay them a share of the proceeds after the project was done, because there was never a "done" point - the machines ran twelve hours a day, every day. Because industrialization was going on in tandem with the breakup of the feudal tenant farm system, there were a lot of laborers available for factory jobs, and consequently factory owners could limit the laborers to hourly stipends without any hope of final remuneration. This was also the stage where factory labor diverged from trade or artisanal labor, although the former also depressed wages for the latter.

In the 1940s in both the US and England, most able-bodied young men went to war, where they learned regimentation, and where both the officer and enlisted classes became intimately familiar with command and control structures. The military had standardized on hourly wages, but had also standardized on the concept of a standard work week for those not in theater in order to simplify wage accounting. In practice, that meant that you got paid for 48 hours of work a week, period. Senior grades had a higher pay structure per hour, and officers made more than enlisted for the same number of hours of service.

When the war ended, the officers went into the newly booming corporations as managers as those corporations switched over from wartime to peacetime production of goods, while the enlisted went into the factories as foremen and line managers. The terms "white" and "blue" collar jobs reflected this - naval daily officer uniforms were white cotton, while the ratings and seamen wore blue chambray shirts.

Wages began going up both because of increased demand for skilled workers and because the management class was also getting wages - they were still hirelings of the rentier or investor class, but because they were doing management type activities they typically had far more involvement in the longer term success or failure of the company. Moreover, much of that management was involved with sales, which in addition to wages, paid a commission on sales made that boosted the income of the management class significantly in the years after World War II.

Meanwhile, unions, which had struggled during the Depression and World War II, exploded in popularity in the 1950s and 60s. This was due in part to a massive demand for people in the building trades - skilled carpenters, electricians, plumbers, and so forth, who had until then perforce taken temporary jobs on an as-available basis - and in part to manufacturing, where again high demand for labor made attractive a system that both guaranteed competence and provided an environment for younger union members to gain experience. As many of the companies involved were comparatively weak, their management was unable to stop this phenomenon; they needed people too much not to concede to labor demands.

By the 1970s, labor unions had become very pervasive, and arguably had become too powerful, at least from the perspective of corporations that were now facing increasingly severe headwinds. In the 1950s, the United States was effectively rebuilding both Europe and Asia. By the 1970s, however, these economies had recovered, and were increasingly competing against the United States in critical areas. Additionally, the Bretton Woods agreement of 1944, which had established a global reserve currency (the US dollar) and pegged that dollar to gold, was seen more and more as a burden by the US, since it meant that US banks were very limited in the amount of money that they could loan out. When French President Charles de Gaulle demanded that the US make payments to France in gold, not dollars (as the French were concerned about the Americans' depreciation of their currency during the 1960s), President Richard Nixon severed the tie between gold and the dollar. This had the immediate effect of causing the oil producing states of the Middle East to band together in order to raise prices in response, which in turn began an inflationary spiral that didn't really end until Federal Reserve Chair Paul Volcker raised interest rates to nosebleed levels.

The massive spike in inflation caused demand for American produced goods to fall dramatically, exacerbating problems that the unions faced. With reduced demand, corporations were able to close plants with impunity. People paid into unions because the unions had been successful in raising wages and work standards (including reducing total work time to 40 hours per week), but as manufacturing jobs disappeared, so too did the clout of the unions, because there were far more people competing for jobs than there were jobs available. This has always been the Achilles heel of the union movement. Ironically, those places where unions have remained strongest are also those where educational requirements and continued training have been the most stringent - teachers, nurses, engineers, fire and police professionals.

It's also worth noting the distinctions in types of inflation. Talking about a broad "inflation" rate is misleading, because in general inflation is the rise in the price of labor or resources relative to the nominal price of other resources. Wage inflation occurred in the 1950s and early 60s relative to commodities, energy and finished goods because labor was comparatively scarce for many jobs. Wages have largely stagnated since about 1971, but there has been massive inflation in managerial salaries and dividends. Energy has inflated relative to wages since '71, while commodities inflated during the period from 1998 to 2008, and real estate inflated dramatically from about 2000 until the market collapsed in 2008.
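The relative nature of inflation described above can be made concrete with a little arithmetic. The sketch below uses entirely made-up numbers (none of these figures come from the essay or from real data) to show how a good can inflate sharply "relative to wages" even when both its dollar price and the wage are rising: what matters is how many hours of labor a unit of the good costs.

```python
# Toy illustration (invented numbers, not real data): inflation only has
# meaning relative to something else. Here we measure a good's price in
# hours of labor rather than dollars.

def relative_change(price_then, price_now, wage_then, wage_now):
    """Change in a good's price measured in hours of labor."""
    hours_then = price_then / wage_then  # hours of work to buy one unit, before
    hours_now = price_now / wage_now     # hours of work to buy one unit, after
    return (hours_now - hours_then) / hours_then

# Hypothetical case: energy doubles in dollar terms while the hourly
# wage rises only 25% over the same period.
change = relative_change(price_then=1.00, price_now=2.00,
                         wage_then=4.00, wage_now=5.00)
print(f"Energy inflated {change:.0%} relative to wages")  # Energy inflated 60% relative to wages
```

The same calculation run against wages and managerial salaries, or wages and real estate, is what makes the essay's point: headline "inflation" hides which class is gaining purchasing power at whose expense.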

Most corporate managers and rentier class investors prefer it when labor costs fall while finished goods inflate (which increases their profit), but fear when labor costs rise and raw material goods inflate (which can often squeeze margins at a time when the economy is tight). Not surprisingly, when the mainstream media discusses the desire of the Federal Reserve to increase inflation, what they are usually referring to is the inflation of finished goods (from cars and houses to computers, packaged foods and so forth) rather than wage inflation, even though in this case wage inflation, relative to other asset classes, is precisely what needs to happen.

In the late 1970s, a new class of business consultants such as Peter Drucker began making the argument that the primary purpose of a corporation was not to create goods and services but to maximize shareholder value. This credo was part of a shift in thinking pushed largely by the Chicago School of Economics and the monetarists, led by Milton Friedman. Along with this came the belief that the senior management of a corporation, such as the CEO or CFO, should be incentivized to increase stock value (which was widely seen as a good proxy for "shareholder value") by giving them options to purchase stocks at a greatly reduced price.

With skin in the game, these senior managers would then have more reason to keep stock prices up. In point of fact, all that this did was transfer a significant amount of wealth from the employees (who were not similarly compensated) and the investors to the managerial class. Ironically, this has served in the long term to significantly reduce shareholder value, while at the same time making such managers largely unaccountable as they ended up stacking boards of directors with their cronies. Weighed down with expensive senior management contracts, many companies ended up reducing long-term wages for employees who weren't critical to success in order to compensate. Additionally, because stock price became the only real proxy for a corporation's value, corporate raiders emerged who would push the stock value of a company down through market manipulation, buy it out, reward the senior managers and fire the labor force, often gorging on pension funds and patents in the process.

The rise of unemployment that resulted was partially masked by the rise of the IT sector. The information technologies revolution started in the 1970s with big iron systems that began to reduce accounting staffs, but it was really only with the marriage of the personal computer and networking technology in the 1980s that things began to change dramatically. One of the first things to happen was that as software reached a critical threshold in the mid 1980s, it began to erode the last real bastion of wage employment - the non-managerial white (and pink) collar jobs that had been indispensable to the command and control corporate structure.

The creation of presentations provides an interesting illustration of the impact this had. Until the mid-1980s, many corporations had graphic design departments. If a manager needed to make a presentation, he would need to work with a designer to design the slides, who would then work with a typesetter, a graphic illustrator and photographer to create the slides, a copy-writer, and possibly a printer, and would often need a month of lead time. With the introduction of presentation software such as Harvard Graphics and later PowerPoint, the manager could do all of these jobs himself, eliminating these positions and drastically reducing the time to do this work. Adaptable artists and designers did eventually go to work for themselves to provide such services, but for every person that became successful in this milieu, three or four did not, and in the process it caused a shift away from the monolithic culture into more of a freelance and studio arrangement.

Ironically, such a process served to hinder the women's movement for at least a few decades. Falling real wages coincided with a rise of women's empowerment to bring a whole generation of women into the corporate workforce as secretaries, which often provided a stepping stone into mid-level management (typically office management or administration). The introduction of personal computers into the corporate workforce actually initially proved beneficial to secretaries, because they were often the first to get access to these typewriter-like devices and consequently ended up getting a leg up on their male managerial counterparts. However, as more people began using PCs in the work environment, it also radically thinned the number of secretaries required in an organization (although in a fitting twist of irony it also had the same effect on mid-level managers a few years later). This is part of the reason that there's something of a gap between older and younger women in most organizations, especially as IT itself became increasingly seen as a specialized domain for nerdy young men.

For manufacturing, however, the IT revolution was devastating for workers. Once you networked computers, it became possible to distribute your workforce, and from there it was a short step to moving work outside the US, particularly to countries with low labor costs, low taxes and lax regulatory regimes. Standardization of shipping containers made shipping raw goods to these external factories for processing and sending the finished goods back easier, and new telecommunication systems meant that it was easier to coordinate production eight to ten hours ahead of or behind you globally. This served to inject huge amounts of money into the Asian economies, which had the unintended effect of raising the wage levels of Chinese, Indian, Japanese and Korean workers dramatically. This outsourcing drained manufacturing from the US, leaving much of the Midwest and Mid-Atlantic as derelict ghost towns.

This also had the effect of reducing the overall import costs of foreign goods, which companies such as Walmart took strong advantage of. The outsourcing of manufacturing not only eliminated manufacturing jobs, but also had an adverse effect on the many service jobs that supported those manufacturing jobs, driving down wages in these areas and giving rise to the McJob - part time, no benefits, paying minimum wage, offering little opportunity for advancement and paying too little to keep up with steadily rising food and housing prices. Automation generally affected services economies less directly - services almost by definition require either human intervention or human knowledge - but it did mean that mid-level management jobs (which typically provided a career path for people in these sectors) disappeared, leaving fewer ways for a person to break out of the "wage-slave" trap.

Dramatic rises in energy and commodities, due both to scarcity and a growing realization on the part of countries that they were being pillaged by Western corporations, caused the machine to falter even more. As the opportunities for the giant petrochemical companies to get access to foreign oil at highly profitable rates disappeared, cries for energy independence began to arise in the US. Energy independence in this context should be read, however, not as an increase in the use of alternative energy sources (which currently receive a very small subsidy from the US compared to the oil companies) but as increased drilling for shale oil, offshore oil and natural gas deposits via rock fracturing (aka fracking). These deposits were considered less economical (in part because of the remediation and political costs) than foreign oil and natural gas, but at this stage there are considerably fewer alternatives left to the oil companies (in 1960, oil companies owned roughly 85% of all oil deposits globally; by 2010, that number was closer to 10%, as most of these deposits had been nationalized by their respective governments).

This has led to an increase in the number of hydrocarbon engineering and maintenance jobs in the US, but this is a labor market that runs hot and cold. The jobs will be around until the fields play out, then will be gone - this will likely happen within the next decade.

We are now in what has been described as a bubble economy - government stimulus is frequently needed to create a temporary market, but these markets, unregulated, quickly grow to a point where they are oversupplying the available demand, attracting parasitic speculators who then cause the system to collapse, producing inflation in that sector followed by rapid deflation and despoiled ecospaces. This happened in IT in 2000 and in housing in 2008, and will likely happen in education and energy production in the next couple of years. The housing collapse in particular is still playing out, primarily in Europe, and it has left a legal tangle of housing ownership that will take decades to resolve, if it ever is (I expect that ultimately much of this will end up being written off as uncollectable).

It is against this backdrop that it becomes possible to understand what will happen to jobs over the next couple of decades. There are two additional factors that play into the picture as well. The first is demographic. People born in 1943, which I consider the start of the Baby Boom, turn seventy this year. In the depths of the recession that started in 2008, when this group reached 65, many of them went back to work - and for a while it was not at all uncommon to see a lot of low wage jobs being held by people in their seventh decade. However, even given advancements in gerontology, the ability of people to work into their seventies deteriorates dramatically. The Boomer generation peaked around 1953. If you assume that only a comparatively small fraction of those aged 70 or above are still in the workforce, this means that this gray workforce will fade fairly quickly from the overall workforce just in the next five years. This will have the effect of clearing out a large proportion of upper-level management as well, which has been heavily dominated by Boomers given the sheer number of them.

GenXers are a trough generation - as a group there are perhaps 65% as many of them as there are Boomers. These people are now entering into policy making positions in both government and business, but because of numbers, the Boomer peak for leaving the workforce hits at approximately the bottom of the GenXer trough for entering into senior management and senior professional positions. This actually translates into a relative scarcity of executive and professional level talent by 2020, now only seven years distant. GenXers, for the most part, are engineers. Many of them, in their 20s through 40s, were responsible for the low level implementation of the web in the 1990s and the 2000s. A large number were contractors, people who generally benefited far less monetarily from the emergence of the computing revolution and the web, and as such they see far less benefit in large scale corporate structures.

Indeed, the GenXer view of a company is increasingly becoming the norm. It's typically small - under 150 people, in many cases under twenty. It's distributed and virtual, with the idea of an "office" as often as not being a periodically rented room in a Starbucks, and with people working on it from literally across the world. Participants are often shareholders without necessarily being employees. Their physical facilities are in the cloud, and staffs are usually two or three people devoted to administration while the rest are "creatives" - engineers, developers, artists, videographers, writers and subject matter experts. The products involved are often either virtual or custom as well, and usually tend to have a comparatively short life cycle - often less than six months. This could be anything from software to customized cars to media productions to baked goods.

In effect these microcompanies are production pods. They may be part of a larger company, but they are typically autonomous even then. They can be seen as "production houses" or similar entities, and they may often perform specialized services for a larger entity - a digital effects house for a movie, a research group for a pharmaceutical company, a local food provider, specialized news journalists. When they do have physical production facilities, those facilities may be shared with other microcompanies (the facilities themselves are essentially another company).

One of the longer term impacts of ObamaCare is that it also becomes possible for such pods to enter into group arrangements with health insurers, and makes it easier for people to participate in such insurance systems without necessarily being tied to a 40-hour paycheck. Health insurance was once one of the big perks of the more monolithic companies, but until comparatively recently changing companies typically involved changing insurance companies as well, a process that could become onerous and leave people with gaps in insurance that could be devastating if a worker or her child was injured. As command and control companies end up putting more of the costs of insurance on the employee, the benefit to staying with that employer diminishes.

The same thing applies to pension plans - it has become increasingly common for companies to let go of employees who are close to cashing out their pensions for retirement, often leaving them with little to nothing to show for years of saving. The younger generations are increasingly skeptical of letting large companies manage their retirement, usually with good reason, especially since the average 40 year old today may have ten or more companies under their belt since they started work, and can expect to work for at least that many more before they reach "retirement age". This means that GenXers and younger (especially younger) are choosing to manage their own retirement funds when possible, independent of their employer.

Once those two "benefits" are taken out of the equation, the only real incentives that companies can offer are ownership stakes and salaries. As mentioned earlier, salaries are attractive primarily because of their regularity - you have a guarantee that you will receive X amount of money on this particular date, which becomes critical for the credit/debit system we currently inhabit. Ownership stakes are riskier, but they constitute a long term royalty, which can be important because it becomes itself a long term semi-reliable revenue stream. If you receive royalties from three or four different companies, this can go a long way to not having to be employed continuously.

The GenXers will consequently be transformers, pragmatists who are more interested in solving problems than dealing with morality, overshadowed by a media that is still primarily fixated on the Boomers, quietly cleaning up the messes, establishing standards, and promoting interconnectivity and transparency. Many of them now are involved in the technical engineering involved in alternative energy and green initiatives, next generation cars, trucks and trains, aerospace technologies, programming, bioengineering, information management and design, and so forth. While they are familiar with corporate culture, they find the political jockeying and socializing of the previous generation tedious, and though they are competent enough managers, GenXers generally tend to be more introverted and less entrepreneurial. Overall, as they get older, GenXers are also far more likely to go solo - consulting or freelancing. They may end up setting up consulting groups in order to pool shared benefits, but there is usually comparatively little interaction between consultants - they are more likely to be onsite with a client troubleshooting.

From a political strategist standpoint, one of the great mysteries of the modern era has been the disappearance of the unions. Beyond the strong automation factors discussed earlier as well as a politically hostile climate to unions, one factor has always been generational. GenXers are probably the most disposed personality-wise to being union members, but because unions generally gained a blue collar reputation, many GenXers (who in general see themselves more as engineers and researchers) have tended to see unions as being outside their socioeconomic class. Moreover, the consultant or freelancer mentality is often at odds with the "strength in numbers" philosophy of most unions.

I expect this generation to also end up much more in academia, especially on the technical and scientific side, or to migrate towards research, especially by 2020 or so as they finally reach a point where passing their knowledge on to the next generation outweighs any gains to be made by consulting. As is typical, the relatively inward looking GenXers will lay the groundwork for the very extroverted generation following after them - the Millennials.

Millennials were born after 1982, with the peak occurring in 1990, and are the children of the latter wave of the Boomers (many of whom started families comparatively late - in their very late 20s - and had children until their late 40s). However, there's also an overlap with the children of the GenXers that creates a double-crested population hump, with the trough in 1997 and then growth until 2007 (which actually exceeded the number of births per year of the Baby Boomers). After that, however, there's been a sharp drop-off, to the extent that in 2012 the number of births was expected to approach the trough levels of 1971. For all that, the Virtuals (those born after 2000) will likely be a fairly small generation, given the drop-off (most likely due to the economy's collapse).

The oldest Millennials are now thirty years old. Displaced by the gray workforce and facing the hardships in the economy after 2007, many started work four or five years later than in previous generations, had more difficulty finding work, and were often forced when they could find work to take McJobs. They are distrustful of corporations, and are in general far more bound to their "tribes" -- connected over the Internet via mobile phones and computers -- than they are to work. Their forte is media - writing, art, film production, music, entertainment programming, social media - all of which lends itself well to the production house model, and which will likely mean that as this generation matures, it will end up producing the first great artists of the 21st century.

What it won't do is make them good workers in the corporate world, or in traditional blue collar positions. Overall, math and science scores for high school plummeted for the Millennials during the 1990s and 2000s, and enrollment in STEM programs in college declined dramatically after 2000 (when the Millennials started into college). Most Millennials are very good at communicating within their generation - this is the most "connected" generation ever - but overall tend not to communicate well with authority figures outside that demographic. (I've discussed this in previous essays.)

While I've seen some commentators who are critical of the Millennials because they see them as spoiled and entitled, I'd argue instead that these characteristics are actually more typical of a generation that overall is just not heavily motivated by financial factors. Most have learned frugality after years of having minimal jobs. They will likely marry later and have fewer children than any generation before them, and their social relationships may actually prove stronger than their marital ones. On the other hand, they will also likely focus more strongly on their craft because of these factors, which means that as they age, they will prosper because of their innate skills and talents.

Temperamentally, the Millennials will tend to act in concert to a far greater extent than the generations before them. They will not join unions, but they will end up creating constructs very much like them. Moreover, they will be inclined to follow authority, but only if that authority is roughly in their generation. Consensus politics will be very important to them, and this will be the first generation that really employs a reputation economy as currency.

Given all this, it is very likely that the nine-to-five, five-day-a-week job is going the way of the dodo. It won't disappear completely for quite some time, but the concept of a salaried employee will become increasingly irrelevant as the production house model obviates the command and control corporation. If you're still learning, you would get paid at a fixed rate plus time, but once you reach a point where you add significant value to a project, you would get points in the project towards a return royalty. Service jobs, similarly, will likely revert to a work for hire basis, coupled with some profit sharing. Manufacturing is shifting to a combination of insourcing with pod companies and artisanal production. Legal and accounting services, where they haven't already shifted to web-based delivery, are pretty much already done on a work for hire basis, with partners getting profit shares.

The biggest changes that are taking place are in the sales sector. The rise of eRetailing is beginning to hit brick and mortar businesses hard. Christmas hiring at physical retail stores has been dropping consistently over the last five years, even as the economy itself has begun to recover. This is primarily because more and more retail is shifting online, to the extent that it accounts for nearly half of all retail activity in the United States during the last three months of the year. Mobile continues to drive that as well, as it becomes far easier to "impulse buy" when your computing platform is constantly by your side.

The only real exception to this trend is in groceries and restaurants, though even there online purchases are accounting for a larger percentage of sales than a few years ago. Many grocery chains now offer online ordering and delivery services for nominal fees, up from only a couple specialized services a few years ago. Supermarket shopping is perhaps more ingrained in people than other retail shopping, so it is likely that this trend will take longer to play out there, but it is happening, especially in cities where grocery shopping is more complicated than it is in the suburbs.

Ambiance stores and restaurants are perhaps the only ones truly bucking the trend, and this has to do with the fact that most restaurants ultimately are as much about entertainment as they are about food. It's why the fast food industry is in slow decline while places such as Starbucks do quite well. They are the modern day equivalent of pubs.

Note that I do not believe that such service jobs will go away completely, but they will diminish, and at some point it often becomes more profitable for a company to be virtual only and not maintain the costs of storefronts. No storefronts means fewer stores in malls, and already many malls are closing or being converted to other uses, while there are very few new mall or strip mall projects starting. Similarly, the number of "big box" stores has been declining as well. On any given day, go into an Office Depot or Best Buy, and what's most remarkable is how little traffic there generally is. Yet people are buying from their online sites, and the stores stay open increasingly to keep the brand alive in people's minds. At some point I expect these expensive "advertisements" to finally close down or turn into general distribution points, with only token merchandise on the floor.

This brings up the final paradox of the wage slave. The number of jobs being created is smaller than the number of jobs that are going away by a considerable degree, even in a "healthy" economy. These jobs are not being outsourced, they are being eliminated due to automation. The jobs that are being created in general require specialized skills, skills which used to be acquired via "on the job training", but increasingly these low and mid-tier jobs that provided such training are the easiest to automate, and hence are going away as well.

It is possible to teach people some of these skills in the classroom, but the 10,000 hour rule of mastery generally applies - in order to understand a particular topic or acquire a given skill, it usually takes 10,000 hours worth of study, experimentation and practice to truly acquire competency in that area. In practice, this usually correlates to about ten years of fairly rigorous work with the topic. This means that while education is a part of the solution, the time required to impart that education can often make these skills obsolete.
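The arithmetic behind that ten-year figure is worth spelling out, since it drives the whole retraining problem. A quick sketch (the 20-hour and 50-week figures are my illustrative assumptions, not from the essay):

```python
# Rough arithmetic behind the 10,000-hour rule. Assumptions (illustrative):
# a practitioner manages about 20 genuinely focused hours of study and
# practice a week, over roughly 50 working weeks a year.

MASTERY_HOURS = 10_000

def years_to_mastery(hours_per_week, weeks_per_year=50):
    """Years needed to accumulate 10,000 hours at a given weekly pace."""
    return MASTERY_HOURS / (hours_per_week * weeks_per_year)

print(years_to_mastery(20))  # 10.0 - the "about ten years" in the text
print(years_to_mastery(40))  # 5.0  - even full-time intensity takes half a decade
```

Even under the most aggressive assumption, the training pipeline is measured in years, which is why skills can go obsolete before the education that produces them is complete.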

The upshot of this is pretty simple - eventually, you end up with a large and growing percentage of the population that simply become unemployable. They are not lazy - most of them had positions until comparatively recently, but those positions are now gone. Meanwhile, profits that are made from the automation do not go to the people who lost the jobs, but the people who purchased the automation, and from there to the people who commissioned the creation of that automation in the first place. Put another way, productivity gains over the last fifty years were privatized, while the corresponding unemployment was dumped on the public domain. That unemployment in turn created emotional and financial hardship, foreclosures, a rise in crime and in the end a drop in the overall amount of money in circulation.

This last point is worth discussing, because it lies at the crux of the problem. In a capitalistic society, the velocity of money is ultimately more important than the volume of money in circulation. When money moves quickly through the system, more transactions take place, and in general more value is created in that economy. When money ceases moving, no one buys or sells, no investment takes place, no jobs are created (though many may be lost), and money becomes dearer, because you have a fixed amount - you can't count on additional money coming in, you can't get loans, and even the simplest economic activity stops. This is close to what happened in 2009. As automation replaces work, billions of man-hours of wage payments disappear - money that would have gone to labor instead goes to the investors, who generally contribute far less acceleration to the global economy than middle and working class individuals do in the aggregate. The wage-hour ceases to be an effective mechanism for transferring wealth in society.
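The velocity argument can be illustrated with the standard quantity-of-money identity (total spending ≈ money held × how often it turns over). The dollar amounts and velocity figures below are invented purely for illustration; this is a toy sketch, not an economic model:

```python
# Toy illustration of why money velocity matters more than money volume:
# the same $100B money stock generates very different total spending
# depending on who holds it. All figures are hypothetical.

def nominal_spending(holdings):
    """Total annual spending = sum of (money held * turnovers per year)."""
    return sum(money * velocity for money, velocity in holdings)

# Case 1: 80% of the stock with wage earners, who spend it quickly.
wage_heavy = [(80e9, 6.0), (20e9, 1.5)]
# Case 2: the same stock, but 80% held by investors, who spend slowly.
investor_heavy = [(20e9, 6.0), (80e9, 1.5)]

print(nominal_spending(wage_heavy) / 1e9)      # 510.0 (billions)
print(nominal_spending(investor_heavy) / 1e9)  # 240.0 (billions)
```

Same money stock, less than half the economic activity - which is the essay's point about wage payments shifting to investors.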

Eventually, a tiny proportion of the population ends up with most of the money in that society, and there is no way for the rest of the population to get access to that money to get the goods they need. We're not quite there yet, but the imbalance is already sizable and only getting worse.

One solution to this problem is to tax wealth that's not in use. This transfers money from wealthy individuals to the government, but given that government has become increasingly captured by those same individuals, the proceeds of those taxes tend to flow back to the same rentier class in the form of subsidies. Taxes can be reduced on low income individuals, but for the most part, low income individuals already pay little in the way of income taxes, though they do pay hidden taxes and fees arising from having to buy the smallest units of finished goods and services, which generally carry the highest per-item cost. Money can be distributed to everyone to spend, but the benefits of such stimulus tend to be short-lived, because the amounts are too small to make an appreciable difference while the same extractive mechanisms still exist in society.

Government-mandated minimum wage floors can be set, but while this will help some, it is precisely these jobs that are most heavily impacted by automation. Moreover, the same corporate capture of the government provides a chokehold on the ability to impose such requirements on corporations. In effect, those in oligarchical control of the government continue to pursue policies that locally increase their profits, but at the systemic cost of destroying the consumer base upon which those profits depend. It is, in many respects, yet another example of the tragedy of the commons.

In many respects this is what the end state of a capitalistic society looks like - stalemate. Fewer and fewer jobs exist. Money becomes concentrated not in the hands of those who have jobs, but in the hands of investors, yet investment money is seldom sufficient to create a market, only to bring a product or service to that market. Wages become two-tiered - bare subsistence level or below, and lavish for those with specialized skills, but only at the cost of continuous learning and training, and the concomitant loss of expertise as skilled workers choose not to share their skills at the risk of losing their marketability.

Because needs are not being met in the formal market, an informal or gray market emerges that is outside the control of both the government and the corporatocracy, one with lax quality controls and little legal redress in the case of fraudulent transactions, and consequently one where organized crime can play a much larger role. While this may seem like a Libertarian wet dream, the reality of such markets is typically more like Russian markets in the aftermath of the fall of the Soviet empire, in which crime lords created monopoly markets where basic goods were available only at high prices or through coercive acts, and where legislators and activists who tried to bring such crime lords under control were regularly assassinated.

So how does a society get out of this trap? My own belief is that in the end, it decentralizes. Power production shifts from long pipelines of petroleum-based fuels to locally generated power sources - solar, wind, geothermal, hydrothermal, small nuclear (such as small thorium reactors), some local oil and natural gas production - intended primarily to achieve power sufficiency for a region, with enough surplus to handle shortfalls elsewhere in a power network. This provides jobs - both constructing such systems and maintaining them - and ensures that energy profits remain within the region.

Establish a minimal working wage but also provide mechanisms for employees to become participants through profit-sharing and royalties, rather than options and dividends.

Make healthcare and retirement saving affordable and universal, rather than as a profit center for insurance companies and pharmaceuticals.

Tax financial transactions in exchanges, and use this to provide a minimal payment to individuals as a way of redistributing the costs of automation (and financial malfeasance) on employment.

Eliminate the distinction between salaried and hourly workers in the tax code, which has created an artificial two tiered system designed primarily to make it possible for unscrupulous employers to have a person work up to 39 hours a week and still not qualify for benefits.

Eliminate the 40 hour workweek - it's an anachronism. Instead, establish a government base payment that provides a floor for subsistence living for everyone, coupled with wage payments from jobs and production royalty payments that provide wealth for people willing to put in expertise and effort.

Eliminate the income tax, and replace it with a value-added tax. The Federal income tax has in general been a disaster: it has increased class warfare, has often been used punitively by various administrations to favor one group or another, is extraordinarily complex, requires too much record-keeping effort from independent workers and small businesses, and is usually easily subverted by the very wealthy, putting the bulk of the burden on the middle class. A value-added tax, while somewhat regressive, is generally easier to administer, does not require that employees maintain detailed records, can be automated easily, and can in general be fine-tuned to encourage or discourage consumption of certain things within the economy.
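A sketch of how a value-added tax is collected: each business in the chain remits tax only on the value it adds, yet the total remitted equals the rate applied to the final sale price. The 10% rate and the stage prices below are hypothetical:

```python
# How VAT self-administers along a supply chain: each participant remits
# tax on (its sale price - its purchase price), so the total collected
# equals the rate times the final price, with no single choke point.
VAT_RATE = 0.10  # hypothetical 10% rate

def vat_remitted(sale_prices):
    """Tax remitted at each stage, given pre-tax sale prices along the chain."""
    remitted = []
    previous = 0.0
    for price in sale_prices:
        remitted.append(VAT_RATE * (price - previous))
        previous = price
    return remitted

# Hypothetical chain: farmer sells for $100, miller for $250, baker for $400.
stages = [100.0, 250.0, 400.0]
per_stage = [round(tax, 2) for tax in vat_remitted(stages)]
print(per_stage)       # [10.0, 15.0, 15.0]
print(sum(per_stage))  # 40.0 - exactly 10% of the final $400 price
```

Because every buyer wants credit for tax already paid upstream, each stage has an incentive to document the previous one, which is why the tax is comparatively easy to automate and audit.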

Tax employers for educational support. Too many corporations want their workers to have specialized knowledge or skills, but in general do not want to pay for the training. Some of that tax can be paid in kind, as knowledge transfer from people within those corporations who do have those skills, at which point the corporation pays for that employee/contributor to teach.

Similarly, tax employers for infrastructure support that directly or indirectly benefits them. Much of the last half century has seen the maxim of privatizing profits and socializing costs become almost holy writ, but this has generally resulted in ghettos and gated communities that benefit a few at the expense of millions.

Encourage telecommuting and virtual companies, while taxing those corporations that require large numbers of employees onsite at all times. If telecommunication tools were good enough to outsource to China, they are good enough to provide telecommuting. This generally has multiple benefits - less need for infrastructure, far fewer carbon emissions, less energy consumption, less time wasted in traffic, fewer monster skyscrapers serving as corporate shrines.

These changes (and others like them) are feasible, but in general will only work if they are attempted locally - at the state or even city level. These are transformative changes - as different regions attempt them, the facets that work and don't work will emerge, and local variations will no doubt come about based upon cultural temperament, but overall success will beget success. Demographic changes, as discussed in this essay, will accelerate this process - those regions that are already investing in twenty-first century technologies are already doing a number of these things and seeing benefits, while those that are heavily bound to the petro-economy will resist them. The irony is that in these latter areas, the wage slave paradox will only get worse, and the economy more dysfunctional over time.

It is likely that thirty years from now the economy of the United States will look very different - mass customization through additive printing techniques; millions of virtual pod corporations, numbering only in the dozens of people, distributed all around the country (and probably the world); cities in a state of controlled disintegration, powered locally and with much more local autonomy; and the rise of a strong creative class supported by an elderly engineering class and a youthful research cadre. None of this will happen overnight, nor will it happen uniformly, but I feel it will happen.

December 21, 2012

The End of the World Didn't Happen Today

Just one of many ways to bring it all to an end.
On this day, according to the Mayan Calendar and the hordes of New Age experts who make their living looking for such portents, the world was supposed to end. Again.

We like the end of the world. Hundreds of millions of dollars a year go into exploring various and sundry ways the world could end - in TV shows, movies, video games, novels, even serious conferences. Asteroid strikes, tsunamis, earthquakes, black holes, rogue planets, expanding suns, supernovae, killer biohazards, the plague, nuclear war, zombie infestations, rogue weather, Cthulhu-esque demi-gods, vampires vs. werewolves, strangelets, divine retribution, global flooding, alien invasions, Nemesis, Apophis, false vacuum phase shifts, brane collisions - it's rather remarkable just how many ways there are to turn out the lights, once and for all.

There's something eminently satisfying about going out with a bang, like the dinosaurs did when an asteroid slammed into the Gulf of Mexico 65 million years ago. Except they didn't, really. Oh, no doubt there were quite a few dinosaurs for which that fateful collision truly was the end of their world. However, the day after the asteroid, there were still quite a few T-Rexes wandering around - a bit dazed and confused perhaps, but they still managed to successfully take down a stegosaurus or three for breakfast. They were there a week later, and a month ... indeed, by all indications they were still going two or three million years after Game Over.

What ultimately did the dinosaurs in was bad weather. In India, the subcontinent's breakaway from the Antarctic landmass and its collision with Asia had caused the crust to become particularly thin over a hot spot region deep within the Earth's mantle, and a whole series of volcanoes opened up, initially cooling the atmosphere with all of the sulfur being released, but ultimately warming it again as nickel, normally held deep within the Earth, made its way in great concentrations to the surface. As things cooled, the nickel provided a critical substrate necessary for the flourishing of a form of methanogen, a methane-producing microbe that released copious amounts of greenhouse gases. In the end, the planet became too hot for the plants which fed the huge appetites of the stegosaurs which in turn fed the t-rexes, and the giant dinosaurs that needed vast amounts of food to support themselves ultimately ended up starving (or more likely dehydrating) to death. Meanwhile, the much smaller mammals and tiny dinosaurs that could get by on a minuscule fraction of the food survived, the mammals by burrowing and hibernating, the dinosaurs by taking to the air. This happened over the course of two to three million years, still relatively fast by evolutionary standards, but far from the "death raining from the sky" eye-blink that makes for such good cinematic fodder.

We like "game over" endings. A good ending makes for a satisfying read, and a poor one, one where too few threads get tied up, makes us feel dissatisfied with the work. You want the villain to be dead at the end - dead so that he can't get back up and menace the heroes one more time. You want the prince and princess to get married to resolve that awful teenage-angsty hormone-driven sexual tension, so they can go on happily with the rest of their lives. You want the war to be over. A good story builds tension, and at the end of the narrative that tension needs to be released and resolved. Life as orgasm. Even when the ending is horrific, one where everyone dies a particularly grisly death, the desire for closure is stronger.

Ironically, a part of this has to do with the implicit assumption on the part of the reader that, by hearing the narrative, they will in the end be a survivor. They will be alive to tell the tale, not rotting in an anonymous grave somewhere. The fact is that, every day, it is the end of the world for somebody, but in all but one case, those somebodies are not you.

However, great closures are also critical for societies overall. The US Empire is in decline, and has been for several years. Historians, who are masters of the narrative, are already looking for the smoking gun, the one event that definitively says that the Third Age is over (to borrow from the recent Tolkienesque interest) and the Fourth Age has begun. They're looking for the day that Gollum bit off Frodo's finger to recover his Precious before tumbling, fatally, into the fires of Mt. Doom. (For a man who invented two complete languages, Tolkien was remarkably inept at naming mountains.)

On this day, the bad guys are vanquished, and the good guys can start building something new again. Yet today it's hard to tell what that "something new" is - or rather, it's easy to tell, but hard to choose from the plethora of "something news" currently in vogue. For the Libertarian, that something new is a society where the intrepid hero defeats the evil government to become a master of his own fortunes. For the Liberal, that "new" is a world where oil is no longer pumped from the earth, where we live in harmony under a benign government of the people, at one with nature in our tree-enshrouded sanctuaries, away from the gun-toting yokels and religious nuts. For the Fundamentalist, the "new" is a world where a benign god looks once more upon His people, bringing them peace and prosperity while the evil unbelievers burn forever in the pits of Hell, in the ultimate of punishments.

Curiously enough, the villains in one person's narrative are the heroes in another. This again brings up the problem of narrative tension. An arbitrary apocalypse in the vast narrative always favors the listener's own tribe, just as it favors the listener. Your tribe will remain, if scattered and sorely beleaguered, while the evil tribes will get theirs. Those few that remain will see the wisdom in banding with your tribe and your way of thinking, at least in the main.

However, life is seldom that neat, and endings, when they do come, are seldom swift and absolute. Instead, the visible signs of a transition, a change from one social regime to another, are usually symptomatic of broader but generally less immediately tangible changes. We're hitting resource peaks in the first half of the twenty-first century that will have major ramifications for the next three or four hundred years. Climate change will cause various regions to lose or gain economic and hence political power. Our economic system is in flux right now because the foundations of those economies are shifting, both due to the aforementioned resource peaks and to the innovations that we have unleashed in the last century.

We have an unprecedented degree of understanding both of what we do and what we don't know about the universe, and the transition from physical discovery to materials engineering to commercialization is occurring in a breathtakingly short amount of time. Our ability to innovate with our economic systems is also unprecedented, and this in turn means that we can make economic experiments (meaning mistakes - offshoring, anyone?) and recover from them within a surprisingly short interval.

Yet the need for narrative is still there, and that is perhaps the challenge for political, social and economic innovators moving forward. For too long the narrative has been that the story is coming to a close, that the survivors will be the ones with the greatest amount of money, land socked away up in the mountains, and arsenals of heavy machine guns waiting for the coming zombie hordes. What's so disturbing about this particular narrative is that the zombies in question are thinly disguised latte-sipping urban liberals, drinks of rotting milk and coffee in hand; the fear is that the world really is coming to an end, that the cities, with all of their people, their big government regulations and their reprehensibly open social policies (women's rights! gay marriage! unions!!!), are going to overwhelm the god-fearing farmers and ranchers of the Real America.

Ironically, it is a narrative that's also promulgated by the suburban financiers and senior managers - the ones that may work in downtown New York but have a home in the Hamptons, or that control their empires from Dallas but are driven in by chauffeur from the Park Cities or Lakewood. They too fear the zombies, but in this case the zombies are the undesirables that will drive down home prices, that will cause cracks in the illusion of absolute mastery that they maintain around themselves. These are the people most invested in the status quo, the ones that see the visions of sustainability and lower economic inequality as a direct threat against their own wealth and station. They are concerned about the New Money, because New Money often comes from undermining the paradigm that helped establish the Old Money in the first place (which was itself once New Money), and today that New Money is increasingly coming from the young, technically competent engineers, scientists, creatives and advocates who recognize the dangers and limitations of the status quo. At one time, this force was helping to prop up the Old Money, but as times and technologies change, the gulf between these two forces widens.

In a way, the younger generation is shaping its own narrative, one that's increasingly at odds with the status quo. They see the future and are worried by it, which means they are adapting to it far more quickly. A winnowing process is going on, one in which the most salient technologies are enhanced, while the less salient are diminished. Biotechnologies, information science, nano-engineering and alternate energy development are all critical. As a generation they have less use for corporate religion or giant conglomerates - they view businesses simply as vehicles to apply capital to solving problems, and view religion as increasingly private and self-directed. They drive less, and are far more comfortable than their predecessors working and playing with people who may be thousands of miles away. Their mantra increasingly is that too much power in the hands of anyone - government or business - is bad, and they are becoming remarkably proficient at making decisions collectively with astonishing speed. These people do not respect existing institutions; instead they see them as relics of another age that are no longer germane to them.

For these people, the end of the world is nowhere in sight, other than as an excuse to throw a good party and an opportunity to remake the world according to their own narrative. To them, this is exhilarating, to others, this is terrifying. In the end, though, they will be the ones writing the next chapters. For now, it is perhaps best to know that this grand story is ... to be continued ... 

December 12, 2012

Decentralizing Society

Scotland, Germany, Iceland, Denmark, Finland - one by one, a very subtle shift is happening in the world, something that I think will become a much bigger factor in the decades ahead. Each of these countries is attempting to achieve energy independence by moving as much of its energy production as possible into renewable power sources. In the examples cited above, the reason for such migration is as much geopolitical as it is concern for the environment - these countries (I'll get to Scotland in a second) do not have many carbon energy resources of their own, and consequently are especially dependent upon other countries, ones with which they have historically had occasionally disastrous relationships.

Iceland's an interesting example in several different ways. During the collapse of 2008-2010, Iceland did something unprecedented. Saddled with supposedly safe debt that "exploded" on them, they rejected austerity, arrested and prosecuted the bankers, nationalized the banks, and repudiated their foreign debt as unpayable. In doing so, they were forced into a situation where they could no longer get letters of credit for large oil purchases, so they began a crash course in becoming internally sustainable. One of the first things they did was re-evaluate their internal energy profiles, recognizing that they had a wealth of energy from geothermal and hydroelectric sources - the energy inherent in hot springs, geysers and glacial melt. Taking advantage of this, Iceland's renewable resources now make up 81% of the island country's total energy production, with the balance coming from North Sea oil.

The economic news and the energy news are not unrelated. The petro-industrial complex is intimately tied into the financial services sector globally, and indeed, many of the aspects of globalization, from the outsourcing of jobs to 5,000 mile salads to the explosion of the global 0.1%'s share of overall wealth, are intimately tied to the retrieval, transportation, distribution and consumption of petroleum products. Iceland chose to drop out of that web for a bit, and in the process is beginning to worry financiers in New York, London, Berlin and elsewhere.

Scotland's driver is a growing desire to separate itself from the political control of London. It has similarly made 100% energy independence a major part of this process, because by no longer being dependent upon North Sea oil (which is showing signs of playing out), it ends up with much greater autonomy in other matters.

Germany, ironically, is a financial powerhouse, but much of that is built primarily upon engineering services and manufacturing of precision goods. Their overriding concern is maintaining independence from Russia and its oil and natural gas production, and to achieve this they are betting heavily upon solar and hydrothermal technologies.

Moreover, they are treating such energy production in a paradigm shattering way. Their goal is not to replicate oil production, but to look at their infrastructure a piece at a time and figure out how to make each piece effectively fuel itself. Projects there include genetically modified algae that are not only especially good at filtering waste water, but that generate energy as a by-product of doing so. The energy produced isn't huge, but it is sufficient to run the plant and push some power back into the grid.

Similarly, solar panels are becoming so much a part of the German landscape that in many towns there are few roofs that don't have them - and this in a country that has a disproportionately high number of cloudy days. The irony is that Germany now periodically produces more power than it can use, and the power grids of neighboring countries are becoming overwhelmed as Germany dumps that surplus energy downmarket onto unprepared grids, bringing them down.

The thing these countries share is that they are relatively compact, are already affluent, and have strong external (typically security) reasons for achieving such independence. For the US overall, this is generally not the case, and this is frequently an argument given by the petroleum industry and its supporters for why alternative power is such a pie-in-the-sky dream in the US. However, these arguments (when not trying to argue that global warming is only an illusion) usually assume that complete conversion from petroleum to technology X is infeasible because petroleum is far more effective and upgrading the infrastructure of the entire country would be absurdly expensive.

In practice, however, this is where the paradigm of self-supporting infrastructure makes so much sense, and why, in many ways, this conversion is already taking place. Forget about total conversion; a one-size-fits-all magic bullet (seriously mixed metaphors there) that will replace the petroleum economy overnight is simply not going to happen in the US. What can happen, however, is making infrastructure self-supporting.

Much of that technology already exists today. You can get an intelligent security monitoring plus power management system for your house for $50/month from cable companies that will let you control the outlets, air system and appliances in your house from an Android or iOS app located anywhere. Throw in next generation LED lighting systems, add a solar collector for your roof, and your house becomes a net neutral environment. Put all the street lights on local solar cells, and start tapping into geothermal as well as hydropower, solar PV and wind-powered systems for municipal structures such as government buildings and schools, and these too start disappearing from the grid. Malls, which have traditionally been huge energy sinks, are either being shut down or are taking advantage of large expanses of parking space to erect solar panels and become self-supporting. Trains, especially light rail and subway, can take advantage of flywheels located in the stations themselves to extract power via induction as the trains slow down, then use those same flywheels to give the trains an induction-based boost out of the station, reducing their overall energy footprint by 60-70%.
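A back-of-envelope sketch of the flywheel arithmetic: the train mass, entry speed, and 65% round-trip efficiency below are assumed illustrative figures, not numbers from this essay or any particular transit system.

```python
# Rough estimate of the energy a station flywheel could capture from a
# braking train: kinetic energy is 0.5 * m * v^2, and some fraction of
# it survives the capture-and-release round trip. All inputs assumed.

def braking_energy_kwh(mass_kg, speed_ms):
    """Kinetic energy of the moving train, converted to kilowatt-hours."""
    joules = 0.5 * mass_kg * speed_ms ** 2
    return joules / 3.6e6  # 1 kWh = 3.6 million joules

mass = 200_000.0   # ~200-tonne light-rail consist (assumed)
speed = 20.0       # ~72 km/h speed entering the station (assumed)
efficiency = 0.65  # round-trip flywheel efficiency (assumed, mid-range of 60-70%)

per_stop = braking_energy_kwh(mass, speed)
recovered = per_stop * efficiency
print(round(per_stop, 2))   # 11.11 kWh available per stop
print(round(recovered, 2))  # 7.22 kWh returned to accelerate the train
```

A few kilowatt-hours per stop sounds small, but multiplied over hundreds of station stops per day per train, it accounts for the large fraction of traction energy the essay describes.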

The same principle applies increasingly to work. One intriguing trend is the re-tollification of highways that were already paid for. Municipalities are assessing tolls on previously free roads, which is having the unexpected side effect of encouraging telecommuting, as employers are forced to question whether having employees make hour-long commutes in order to be in the same office is worth the wage increases needed to cover those commute costs. (In effect, most of the cost of commuting to and from work, as well as parking, has been pushed onto employees, even though being on site is a requirement imposed by the employer, and employees are pushing back.)

Similarly, the very technologies that allowed outsourcing - including cloud computing and applications as a service - are increasingly making insourcing more attractive as the pendulum swings in the other direction, because such insourcing is still distributed, but over a more manageable geographic region. Monitoring and troubleshooting as often as not now occur on distributed systems in the cloud, so having a lot of engineers located in the IT "server" room is now "so 90s" - the room is no longer there, the network admins all have their iPhones and iPads configured to notify them the moment an error condition fires, and most of those apps are increasingly running on Amazon or Google or other cloud providers. Managers work from home, marketing people produce ad copy and visuals by collaboration, and most meetings are now done through GoToMeeting or something equivalent.

Why does this matter? Every virtual meeting is five to ten fewer trips downtown, or perhaps five to ten fewer airline tickets. This puts fewer cars on the road, which decreases the energy footprint. Automated toll systems can also be tied into financial banking networks and hence audited, making it possible to determine who pays for driving. Insourcing also reduces the number of cargo ships on the seas, each burning hundreds of gallons of oil an hour, and reduces the amount of air traffic.

Yet at this point those invested in the status quo would argue that fewer shipping or aircraft trips mean that many fewer jobs - fewer airline workers, fewer stevedores, fewer truckers. They're right, of course. And here is where things go all political. Ultimately, something has to give. The future has arrived - all of those labor saving devices, all of those robots, all of the efficiency generating software and infrastructure ultimately imply that the number of hours of meaningful work is in permanent decline. There will be occasional spikes and probably a floor at some point, primarily in the services sector, but even with jobs moving back home you need 1 person for what required 100 a century ago in the manufacturing sector, and increasingly even the financial sector is beginning to look anemic as trading algorithms replace the Masters of the Universe, just as large scale search databases have significantly dented the legal and medical professions.

Ultimately then, the question is how you resolve this fundamental contradiction - providing a means for the distribution of value in a capitalist society to the largest percentage of people when the most traditional mechanism - wage labor - no longer provides that capability. I'll address this issue next week.