May 7, 2009

Future Proof: Kurzweil Cities and Kunstlervilles

According to a number of serious and well-intentioned articles written in the last century (not to mention the Jetsons), by 2010 we were all supposed to live in giant cities with mile-high sky-scrapers, flying cars, pneumatic tube transit, and robot servants. Admittedly, this view gave way to a somewhat more dystopian version around 1980 or so - think Blade Runner, for instance - in which overpopulation, over-urbanization and technology gone wild created a dark and threatening world.

Perhaps the biggest exponent of the power of technology to change the world for the better is Ray Kurzweil. An inventor, author, and visionary, Kurzweil has long argued that as Moore's law continues its seemingly unrelenting progression, humanity and the machine will become ever more indistinguishable, and that ultimately there will be no problem that can't be solved with a suitable application of intellect.

At the other end of the spectrum from Kurzweil is James Howard Kunstler. Kunstler has been writing about the relentless spread of urbanization and the problems that will occur as systemic shocks - peak oil, peak water, climate change, aging populations and so forth - cause profound changes to the way that we build our cities, ultimately resulting in the destruction of the suburbs and the end of technological society as we know it. His vision of late 21st century life looks a lot more like the mid-19th century, and it is curiously appealing even in its starkness.

I've dubbed these scenarios Kurzweil Cities and Kunstlervilles - one optimistic, one pessimistic - though, looked at through slightly different lenses, they could just as readily trade places as utopia and dystopia.

As many people have noted, cities are organic over a long enough period of time. They exhibit emergent behaviors that seem eerily similar to the way that lower-order life forms act. They grow in response to available energy sources, expanding outward as energy enters the system, contracting back in on themselves as energy leaves. Highways and streets are the arteries, carrying car and truck corpuscles from one part of the city to another. The nerves are the power and information conduits within the city. When cities collide, they either form systemic cells or absorb one another, the older former towns slowly losing their distinct identities over time.

This metaphor, or abstraction, is an important thing to keep in mind when looking at the future of cities. Cities grow in response to increases in population. This may seem obvious, but I'd contend that it's actually a very subtle point - the larger the population, the more likely that the necessary number of interactions can take place to push the city to a new level of abstraction, while at the same time the greater the drain on the energy resources available to that city. In a city where the energy drain is higher than the energy sources (where energy can be physical energy such as electricity or the abstraction of energy in the form of money), the quality of life drops - there are fewer job opportunities, the standard of living goes down, the government becomes more authoritarian, and the ability to support urban services declines.

Technology cannot create new energy - it can only make it possible to use existing energy more efficiently, and always at the cost of powering the technology itself. This has always been the fundamental flaw in Kurzweil's vision - Moore's law does not come for free. Every generational doubling in processing power occurs because more energy goes into the technologies that make it happen. The cost of the fabrication plants that produce new microprocessors increases geometrically as well - building a typical fab now runs well into the billions of dollars.
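To make that "geometric" growth concrete, here is a minimal back-of-the-envelope sketch. The starting cost and the four-year doubling interval are illustrative assumptions (loosely in the spirit of Rock's law), not industry figures.

```python
# Toy illustration of geometrically growing fab costs.
# Starting cost and doubling interval are assumptions, not industry data.
start_year = 1990
start_cost_usd = 200e6        # assumed cost of a leading-edge fab in 1990
doubling_period_years = 4     # assumed doubling interval

for year in range(start_year, 2011, 4):
    cost = start_cost_usd * 2 ** ((year - start_year) / doubling_period_years)
    print(f"{year}: ~${cost / 1e9:.1f}B per fab")
```

Even with a modest starting point, compounding doublings put the price tag into the billions within a couple of decades - which is the point: each new generation of chips requires ever more energy and capital to produce.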

The problem that's led to the current crisis is that the energy costs here are borrowed. Neither Intel nor AMD generally has anywhere near the amount of cash on hand to build new fabs. Instead, they borrow the money against future earnings - in essence, borrowing energy that hasn't been created yet.

This future borrowing has been endemic in US culture for a long time. Cities (and larger geopolitical structures) generate their revenues in one of a few ways - they take possession of a power or resource source and sell from it, they receive revenues from the state (which simply pushes the problem up a level of abstraction), they tax the current revenues of their citizenry and associated companies, or they issue bonds to borrow against the future earnings that the project in question will produce, adding in a premium to compensate the bond holders for the risk of the bond defaulting.

When the weight of such credit exceeds the potential of the system (as it exists) to pay back those loans, the system collapses. That's what is happening now. The system is becoming less energetic, and as such, the ability of the system to support its abstractions is diminishing. The US government is working diligently to prop up the system, but it's constrained by the same problems - any money that it creates is a promissory note on new energy production, despite the fact that energy production in the US has been declining since the early 1970s. It may be able to sustain the status quo for a while longer - but the next crash will likely be harder.

So is Kunstler in our future then? Not necessarily. Kunstler's central thesis is that an oil-dependent economy will eventually collapse as the supply of oil continues to diminish relative to demand. Oil is important, both as a fuel source and as a feedstock for the production of goods, but it's important to differentiate these two use cases. The principal use of oil today is for transportation, moving things from point A to point B. If you can switch cars over to electric or electric-hybrid drivetrains, this will significantly reduce demand for oil - perhaps even to the point where US production can easily accommodate all other uses of oil. Flywheel systems and shock kinetics also add potential power, especially for larger vehicles that have more intrinsic momentum.

To do so, however, other changes become important. Electricity production needs to become more distributed. Efficiencies in solar power production are raising the possibility that cities can actually become net power producers - both with regional power "farms" and with solar-enabled houses and businesses. Beamed power - in which solar collectors in space are used to create coherent microwave beams that can deliver power to ground stations even in cloudy areas - could dramatically increase capacity. Geothermal taps, wind power, wave power and more efficient supercapacitors make energy production in coastal areas more feasible. More efficient monitoring and routing of power (the smart grid) can also ensure that energy is made available to the places with the largest demand, rather than getting wasted.

There are even places for such technologies as nuclear fission plants, which, despite the publicity around both Three Mile Island and Chernobyl, are generally much safer today. The principal problems here are the high upfront costs and the still-troublesome waste disposal issues.

What this implies, however, is that the future will likely be neither Kurzweil cities nor Kunstlervilles. Instead, for a while it will be a mix of both - cities that can most effectively harness net energy production will thrive and grow, and the standard of living there will improve. Cities that can't will sink into slums and abandoned neighborhoods, crime will rise, and people who can afford to leave will do so, heading for the places that offer better standards of living. The dominant cities of the twenty-first century will be the ones that make the transition first, and it is likely that these cities will also end up creating stronger regional trading blocs that circumvent political boundaries (a case in point would be the Vancouver/Seattle/Portland corridor, which has the potential to become a cohesive political entity as energy and resource systems merge, despite crossing both state and national boundaries).

Indeed, this last point is worth reiterating - political boundaries may be conservative, but they also eventually snap in the face of energy flow structures. Regional trade and energy blocs are comparatively new abstractions, eddies along the older nationalistic boundaries. They will gain in cohesiveness over time, eventually overshadowing older nationalistic boundaries altogether. This means that, again taking the case of "Cascadia", while inhabitants of Portland, Seattle and Vancouver will continue being citizens of their respective states, provinces and countries, they will increasingly think of themselves as being Cascadians as trade and energy alliances build.

Ultimately, the cities of 2020 or even 2050 will likely end up being not that different from today, at least on the surface, though individually they may look quite different. Some places, like Detroit, may not even exist - it was conveniently placed, from the 1920s through the 1960s, to bring together the raw materials, energy sources (from Pennsylvania oil) and cheap labor to mass-produce cars, and just as conveniently placed to distribute them. None of these factors are in play anymore, so the city is dying.

On the other hand, when a city enters this mode, it is also, ironically, at its most fluid - investment terms are favorable, it's easier to raze dead neighborhoods, and townships that are tethered to the city are able to break free and make more effective decisions at lower levels of abstraction. Detroit in 2050 may very well be a network of independent towns, each powering its own subgrid, each producing its own products and services. Education and the arts may very well be growth industries by that point, with energy production subsidizing the initial costs, and this is perhaps the real lesson to be gained from Kurzweil and Kunstler both - by moving off the oil grid, by moving away from a caustic and self-defeating consumerist culture, it may be possible that both scenarios come true: a future full of university towns and centers of learning and the arts, urban enough to bring together the necessary confluence of people but rural enough to sustain the agricultural base of the region. I could live with that.

Future Proof: Freelancer

I have been a freelancer for most of my working career. The specific jobs vary, of course - I've been a freelance writer, a freelance journalist, a freelance programmer, a freelance information architect, a freelance trainer, a freelance teacher - the list goes on and on. While there is a standing joke that freelance is another word for unemployed, I'd definitely have to disagree there ... I have had years where I've cleared six figures as a freelancer, though there have also been a few years where I've made just barely above the poverty line.

There are certain professions that lend themselves well to freelancing, most of them in the information sphere. Programming is a natural - projects have a beginning, a middle and an end. After the project is done, you may or may not need the skill-sets of the person involved. Ergo, freelancing. Writing is another intellectual pursuit with a definite terminus. Teaching is similar: you teach classes for a quarter or two, but unless you're tenured there's no real advantage to keeping a teacher around when all you're looking for is someone to impart this particular wisdom at this particular time.

There are similarly professions that don't lend themselves well to freelancing, though they are becoming rarer. Indeed, as I'm writing this, I'm scratching my head about what professions can't be done in freelance mode. And that, in a nutshell, may be the problem.

Full time work makes a great deal of sense in an industrial society - the need to produce X number of widgets per hour every day means that you need to have labor there every hour of that day, you need managers for that labor, and then you need managers for the managers. The cost of disruption to that labor is high - if someone quits, you have to get someone else trained up in the vacated role, and holes move through the organization until you can find someone from the outside. This translates into significantly reduced productivity.

Most of the "benefits" provided by business have their origins in this mindset as well - health care originally made sense in the form of an onsite doctor or a close relationship with a nearby hospital, in great part because it made financial sense to ensure that workers had as few disruptions due to illness or injury as possible. Pensions (and later investment vehicles) similarly emerged as a way of keeping employees long term - people were far less likely to quit for a competitor (and take potentially valuable information with them) if the company held on to their retirement savings. In general, retention was the rule.

However, today, this process is going in reverse. Businesses are disaggregating. Conglomerates are selling off or IPO-ing divisions because the costs involved in a large labor force increasingly outweigh the benefits. Health care costs are skyrocketing as the workforce grows older, as the multiple layers of "managed care" extract ever larger portions of the pie, and as fewer doctors and nurses enter the field. Pensions had long been something of a running joke - a borrowable pool of funds that companies often put into fairly risky investments - and as those investments failed to pay out, companies are now faced with new retirees asking for their pension funds just as those funds have been wiped out by malinvestment and mismanagement.

What makes this worse is that the technologies are increasingly in place such that people no longer need to be in one place to work, and that if a person does in fact leave, they seldom have the same negative impact on productivity (in the short term) that they once did - even if that person is a stellar performer. Longer term, of course, losing those star performers can be the death knell for a company, but the difference is that the impact is seldom felt for a while.

Thus, for many companies, the ongoing recession is a chance to reduce their existing obligations - purge their full-time ranks and then, as people become more desperate, rehire them on a contingency, freelance or part time basis. From an accounting standpoint, this is the best of all possible worlds - you don't need to pay for ever-increasing health care, don't need to make contributions into a pension plan that you know will never actually be fully capitalized, don't have to dilute existing stock, can hire more people when demand rises and can then lay them off when demand falls, either on a project basis or over the course of a general economy's rise and fall.

However, from the labor standpoint, this is also the worst of all possible worlds. As a freelancer, you are essentially running your own business, but almost invariably without the level of business support that corporations routinely have. You become responsible for your own health care, for doing your own taxes (and usually get taxed at a fairly high premium for being "self-employed"), and for your own retirement. Work becomes episodic and sporadic - you are either searching for new work or facing a glut that you can't fill, but you don't dare outsource it because you need the money to tide you over in the lean times.

Most independent freelancers compensate for the sporadic nature of the work by charging a premium for services - a contractor should, in theory, cost a company more than a full-time employee in the short term because the freelancer is paying for the overhead that would otherwise be paid for by the company. However, in practice, unless you are highly specialized, you are competing with a (currently growing) pool of similar contractors, which means that companies can effectively bid competing contracts against one another to keep these wages low.
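As a rough sketch of where that premium comes from: if a contractor has to cover the benefits load and the unbilled downtime that an employer would otherwise absorb, the break-even hourly rate ends up well above the equivalent salaried rate. The salary, benefits load and utilization figures below are illustrative assumptions, not market data.

```python
# Rough sketch of the break-even rate for an independent contractor.
# All inputs are illustrative assumptions.
salary = 80_000               # assumed full-time salary
benefits_load = 0.30          # assumed employer-paid benefits/taxes, as a fraction of salary
utilization = 0.70            # assumed fraction of a 2000-hour year actually billed

employee_cost = salary * (1 + benefits_load)   # what the employer really pays
billable_hours = 2000 * utilization

break_even_rate = employee_cost / billable_hours
print(f"Salaried equivalent: ~${salary / 2000:.0f}/hr")
print(f"Contractor break-even: ~${break_even_rate:.0f}/hr")
```

Under these assumptions the contractor has to charge nearly double the salaried hourly rate just to stay even - which is exactly the premium that a glutted contractor pool lets companies bid away.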

This practice is exacerbated by agencies, which usually end up acting as a buffer between the freelance labor force and companies. Most of them may offer very short term benefits for the duration of the contract - minimal health care policies, for instance, that the employee usually has to purchase - but usually nothing beyond that. In exchange for that, they absorb that 20-30% pad that freelancers would otherwise save up for down-time, meaning that from the hiring company's standpoint, the labor is still expensive, but can be let go at a moment's notice without significant contractual problems - and because the agency itself can cap the wages, the wages are still less than they would be for an independent contractor.

Currently 29% of the workforce in the United States is on contingency contract, up from 24% in 2005. That includes both part-time workers (those deliberately held below the minimal 35-hour line that constitutes full employment) and freelancers who may work 40 hours or more a week but are hired on a temporary basis. It's likely, as companies continue to shed jobs, that this will grow to between 33 and 37% by 2015, meaning one in three people will be working outside of the established "safety net" of full or salaried employment, including a rising percentage of professionals - management executives, medical practitioners, financial services professionals, lawyers, engineers, marketing and communication specialists, designers, system architects and software developers, along with the whole plethora of "creatives" - artists, writers, musicians and so forth.

Of those, roughly 70% are female, which reflects less upon a bias against women (though that's there too) and more on the fact that women have entered the workforce more recently than men, are more likely to be in information-centric careers and are thus perhaps more indicative of future trends than men are. It's worthwhile noting that the percentage of contingency workers under the age of 40 is also much higher than it is for those older than forty, though how much of this is due to structural changes in the workforce vs. the fact that younger workers are more likely to have fewer commitments that make contingency work more attractive is hard to tell, save that the under-forty contingency percentage has been creeping up steadily for decades.

The question for policy-makers is what to do about it. First, it's worthwhile to note that there is a world of difference between a freelance lawyer or programmer who has specialized knowledge and can usually afford to handle insurance, taxes and retirement savings and the part-time Walmart worker who likely can't - and for clarification, I'll refer to the first as freelance workers and the second as contingency workers.

The freelancer in general should bite the bullet and incorporate as a small business, and press for better legislation providing more legal rights to these microcorporations. I see this ultimately happening, especially as the number of web businesses rises - businesses that have a significant "virtual" presence, but that may represent a group of one or a handful of active partners. Overall the IRS has taken a dim view of such small organizations, but they represent the bulk of all new incorporations, and as the force multipliers of technology have increased the ability of such small companies to have an outsized presence, it's likely that most of these businesses will stay small, people-wise.

Unfortunately, contingency workers may not be as well positioned. In the 1930s, Franklin Delano Roosevelt worked with the Federal Reserve to create a plan to put Americans to work long term - a decision was made to allow a certain amount of inflation in the monetary base in exchange for full employment. In essence, every year the monetary base was allowed to grow by between 2% and 3%, which devalued the dollar by a corresponding amount. Prices rose and real wages dropped, but because more people were entering the system at that point than leaving it, those entering were making marginally less, which, in the aggregate, freed up a lot of capital. That capital was in turn used to start new projects and hire more people, while keeping people happy that their wages (at least on paper) were stable or growing slightly.
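To see how a "modest" 2-3% annual expansion of the monetary base compounds over a working career, here is a small sketch; the wage figure and the thirty-year horizon are illustrative assumptions.

```python
# How 2-3% annual devaluation compounds over a 30-year career.
# Wage and horizon are illustrative assumptions.
nominal_wage = 50_000
years = 30

for inflation in (0.02, 0.03):
    real_value = nominal_wage / (1 + inflation) ** years
    lost = (1 - real_value / nominal_wage) * 100
    print(f"{inflation:.0%}/yr: a flat ${nominal_wage:,} wage is worth "
          f"~${real_value:,.0f} in year-one dollars after {years} years "
          f"({lost:.0f}% of purchasing power gone)")
```

At 2% roughly 45% of the purchasing power evaporates over the career; at 3% it is closer to 60% - a quiet transfer that only stays painless while the pool of new workers keeps growing.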

This worked for a while precisely because it was a Ponzi scheme - so long as the working population itself was growing, such a model was sustainable. However, in the last eighty years, the demographic pyramid has inverted, and there are now more people near the end of their careers than there are starting out. Add in the effects of technology in making work more efficient, and you get the rather ugly situation we're in now - more people staying in the workforce at the higher wages that their skills and experience should support (and who are desperate now to refill their coffers after the last couple of years), and fewer people at the lower wages that support the Ponzi scheme.

What this means is simple - many of the jobs being shed now by business are not coming back. Middle management has been hemorrhaging for the last two decades because the value provided by these managers is no longer as critical. Retail sales jobs are disappearing at a rapid rate as retail centers collapse in the face of low demand and Internet distribution. Manufacturing looks to be in its death throes in the US (if the bankruptcies of Chrysler and GM are any indication), and moving forward it is likely that the jobs which do return will be filled not at the upper end but at the lower, by new designers and engineers who are more familiar with cutting-edge tools, fabrication methods, and technologies. Construction will likely be at an ebb for the next decade. There are more marketing people out there than there are markets, and again, as new jobs do arise, they will be in areas that favor the young and savvy rather than the experienced.

What makes this worse is that even in those areas where growth may occur - health care, energy production, high speed rail, education and the like - you are generally going to be talking about specialist jobs requiring long term training - and an education/training system that is still bound up in a large corporate model. What's more, even if the education did exist, the absorption rate for these professions is comparatively small - you need more doctors, for instance, but if even one half of one percent of the unemployed work force were to go back and get medical degrees, it would easily swamp the field.

This will set in place the great forces of the next decade. Each recession is likely to result in higher unemployment than the previous one, while each recovery will see a smaller percentage re-employed. Against this will, paradoxically, be the growth of spot shortages in the labor markets in specialized areas - those that are capable of going freelance and are successful at it will end up creating loci of specialized job growth, but the growth will remain limited.

This is always a dangerous mix for political stability. Shadow economies usually emerge once you reach a certain level of unemployment - people still have an imperative to survive, and will do so any way they can. Drug trafficking typically rises during recessions, even as prices for those drugs fall, because drug dealing provides not only income but organizational structure (albeit in very dangerous organizations). Prostitution rises as well, for much the same reason. Both left-wing and right-wing paramilitary organizations tend to do quite well during these periods, providing both places to live and organizations to be a part of, and such organizations, while possibly carrying out political agendas, usually provide "security" services to that same underground economy. The Internet in this case will likely only hasten the process; it is very easy to set up online communities and exchanges that can't be easily regulated, taxed or even monitored. As people become more desperate, expect that barter and trafficking on these sites will increase dramatically.

On the other hand, it's also likely that a virtual side of the shadow economy will show up in online games and other environments. Already, there are people making a living producing goods and services in games like World of Warcraft or Second Life, playing automated gambling sites, or engaging fully in eBay and other online markets. The irony here is that while this market is likely growing dramatically, its metrics are so different from those of the "real" world that it's hard to tell how many people who are technically unemployed are actually making a living there.

Note that these are also freelancers, though they don't show up in official measures as such - and there's a lesson to be learned from this. Over the next decade you're going to see a generation grow up on the Internet, learn to make a living there, and develop an entirely new conceptualization of business there. They're growing up in a grey area that's neither "corporate" nor governmental, becoming very entrepreneurial while at the same time working outside of the bounds of contemporary business.

Many (and certainly the best and brightest) of these younger men and women are going to grow up with nothing but disdain for the modern corporation. The more that they establish themselves on the Internet, the less likely that they are going to put up with office politics, small cubicles, long commutes, and the increasing uncertainty of job stability in an organization that could cut 10,000 jobs in one fell swoop. The talented ones will be on the cutting edge, creating new virtual company after virtual company, each staffed with perhaps a couple dozen people tops that communicate with one another from remote locations, each company with a killer product or idea that will chip away at market share of conglomerates piece by piece.

When the economy does improve, this generation will not come to work for the old corporations. The smart companies will change in response. Most won't. Many of these companies will sink into irrelevancy, no longer able to tap into a mindset that is radically different from anything that the senior managers can even begin to imagine. These people will have become used to starting with next to nothing and being exceptionally frugal - they will be anti-consumerist, highly innovative, and with very little use for traditional social structures.

Hiring managers, beware. The freelancer is about to take over your business.

Swine Flu: End of the MBA Farmer?

While there are legitimate questions about the potential severity of swine flu, it is still dangerous for a simple reason. Most flu viruses in circulation are very minor variations on existing strains, which means that most people who get the "flu" end up with symptoms that have more to do with histamine reactions - runny eyes and nose, aching joints, maybe a day in bed feeling lousy - and then they're past it.

Swine flu, otherwise known for its genetic markers as H1N1, isn't an existing, commonly circulating flu. It's relatively new although with very old antecedents, which means that most people have no immunity to it. This means that it will likely spread remarkably quickly, will leave a significant portion of the population sick with it, and could prove to be deadly even for adults.

What epidemiologists are discovering about this particular flu bug is very disturbing - first, that it is a variant of the Spanish Flu virus, which accounted for more deaths worldwide than World War I, the war raging at the time. Spanish flu was extraordinarily virulent, and when it finally died out, it became very quiescent - effectively disappearing altogether from the cloud of seasonal viruses that normally lay people low in late winter.

However, in addition to this, it now appears that the term Swine Flu is more apt than was even apparent on the surface - Swine flu itself first appeared in hog factory farms in the 1990s, mutating rapidly in the high density "population" of pigs kept in tiny pens little larger than the pigs themselves. The flu wasn't lethal for pigs, and the particular strain of swine flu that did jump to humans was of a variant that didn't "catch", failing to reach critical mass or virulence to be a true pandemic.

The early 1990s also saw the graduation of a crop of new business school MBAs, instilled with a twin philosophy - that automation was the wave of the future, and that one could apply the new thinking of the 1980s to every business endeavor, agriculture included, in order to transform it into a hyper-efficient super business. "Archaic" farms that had built up an understanding of animal husbandry over centuries were quickly put out of business and bought out by new "factory farms" that relied on a combination of technology, mass injections of antibiotics, close confinement of the "stock", and waste disposal passed off onto the community.

The last issue eventually caused enough of a reaction that many of the now very wealthy agribusiness concerns realized that setting up factory farms in Mexico, which had far laxer environmental laws, lower labor costs and generally a less empowered populace, might actually prove more profitable (just as such farms had originally tended to relocate to states with lower taxes, fewer environmental restrictions and lower wages).

In the end, this strategy, while increasing the overall production of beef, pigs and chickens dramatically, also caused the price of these meats to drop fairly dramatically, further eroding the ability of other farms to compete and driving them out of business. Meanwhile, south of the Rio Grande, these relocated factory farms proved the ideal breeding ground for new and increasingly virulent strains of viruses. It was only a matter of time before such a strain would jump to humans (indeed, it's likely that Mexican workers at these plants were themselves virus laboratories, providing many more opportunities for animal-to-human transmission), and from there additional vectors took it into the general population - workers' children playing with other kids and bringing home the virus, usually without knowing they had it.

The epidemiology of viruses is well known, yet advanced knowledge of medicine isn't going to help when you have viral factories that speed up the evolution of viruses a thousandfold. Even if this particular virus proves not to be especially virulent, the next one, or the one after that, may well be. Perhaps it is time for us to start questioning whether factory farms are in fact yet another artifact of the "greed is good" mentality that's proving so destructive to the rest of society. Beyond the ethical dilemmas of keeping animals in such conditions, these factory farms are increasingly proving to be businesses that do more harm than good, and as such need at a minimum to be rethought in light of that, and perhaps even abolished (not just moved to places where people can't protest them).

Chrysler, Hedge Funds and Contracts

President Obama is beginning to look less like Franklin Delano Roosevelt and a lot more like his distant cousin Teddy. After several months of trying to come up with a viable solution for preserving Chrysler, yesterday the ailing car company went into formal bankruptcy, which means that the auto unions and the government now essentially own the company.

For the last week, Obama has been working with all of the major parties - automaker Fiat, the unions, banks, hedge funds and similar investors and lien holders on the company - to try to stave off bankruptcy, while trying to keep from adding even more federal loans to the beleaguered company. In the end, while most parties agreed, the major hedge funds balked, demanding preferential treatment in terms of payback and seeking to get 2-3 times as much return on their investments as every other player. Obama finally lost his patience, ordered the company into bankruptcy, and effectively hit the reset button, wiping out several billion dollars of outstanding equity as part of the process.

No doubt the financial industry and its captive press will scream bloody murder here, but the events of the last week represent the emergence of a new, big-money hostile political environment that will likely only strengthen from here.

In financial circles, one of the most sacrosanct documents is the contract. Filled with obscure legalese, most contracts are dense, deliberately opaque, and often designed to seek the maximum possible advantage of one side over the other. Contractual obligations have played a major part in the most recent financial crisis, especially when such obligations have overwhelmingly benefited the financial industry. A prime example was the defense given by investment banks that, even while being funded to the tune of hundreds of billions of dollars by the US government, paid out lavish bonuses to superstar investment bankers, analysts and C-level officers - they were contractually obligated to pay these bonuses, and as such couldn't go back on them.

Contracts are important. However, the problem with contracts is that while they may in fact describe the obligations between two parties, there is *always* a third party involved. Call it the public good, call it government, call it society, but in all cases it should be seen as the interest that the rest of society has in ensuring the peace and stability of that society. One of the underlying shifts of the last forty years has been the rise of the doctrine of private business - that so long as companies do not engage in specifically illegal activity by the letter of the law, government has no role in the contractual process - even if the companies engage in activity that violates the spirit of the law. Grover Norquist's famous quip about wanting to see government so small that it can be drowned in a bathtub is perhaps the pithiest encapsulation of this philosophy.

Tim Geithner, formerly president of the New York Federal Reserve, and Ben Bernanke, the current Federal Reserve chairman, have been steeped in this zeitgeist for so long that it is central to their world view. Barack Obama, on the other hand, has seen what happens when the rule of contract exceeds the rule of law, and as he becomes more comfortable with his own authority, he is beginning to exercise what may very well become known as the Obama Doctrine - that contracts that harm the public good, even while being within the letter of the law, can be abrogated by the third party in those discussions - the government, keeper of the public good.

There are many in the investment community (and in political circles) who are fearful that this approach will cause investors not to want to reinvest in the banks, for fear of their investments essentially being annulled. This has always been a powerful weapon to wield against those who would seek to change the status quo; however, it is an argument increasingly without teeth. Those who invest should understand implicitly that no investment is guaranteed to be without risk, regardless of whether that investment is a few shares of penny stock or a sizable stake in a car company ... or a bank. The owners of a business who fail to press the people who manage that business to be more responsible, more innovative, more willing to respond to changes in demand, and more ethically accountable should hardly be upset when those companies fail.

Indeed, this is perhaps one of the fundamental problems that this society faces: there is an implicit assumption that one can make a business grow and thrive simply by pouring money into it, especially at the senior management levels. In essence, money is being used as a way to get a reward - dividends - without otherwise having to do any work. It has also become a way of dodging the responsibility of managing a company well; rather than planning for changing environments or trying to produce better products and services, most senior managers have become adept at manipulating the markets to increase dividend yields for their owners.

It's increasingly obvious that sweat equity, which has long been a very secondary aspect of business, is once again coming into its own. To me, Obama has just turned Chrysler into an object lesson, one that banks and financial institutions in general should pay a great deal of attention to. It looks like the silent partner is beginning to speak up, and what he's saying is going to completely reshape the way that America does business.

Future Proof: The Disaggregation of Business

91. Our allegiance is to ourselves—our friends, our new allies and acquaintances, even our sparring partners. Companies that have no part in this world, also have no future.

- Cluetrain Manifesto

The following blog is written in support of Cluetrain Plus Ten, a celebration of the 10th Anniversary of the Cluetrain Manifesto.

The news today in the papers was rather stunning - the United Auto Workers union was buying part of GM and Chrysler. General Motors, once the largest and most powerful car company in the world, is being sold to its workers because the company became too fixated upon the business of making money and not fixated enough upon the business of making cars. Presumably, those workers, who are still in the business of making cars, may actually understand where their priorities really are.

This process is going on everywhere. The newspaper publishing industry is disintegrating, not because there's not enough news, but because there's too much of it - millions upon millions of "citizen journalists" who are reshaping the fabric of news, armed with inexpensive camcorders and laptops and iPods. Big box stores are being replaced by hundreds of thousands of specialized retailers, operating over the Internet or with minimal brick and mortar presences. Office parks are emptying out, as the workers of the companies that used to be in them work from homes and coffeeshops and conferences a thousand miles away. The giant businesses loom over all of this like hulking dinosaurs, scary until you realize that most are dying, and that what you are seeing are the skeletal ribs of decaying corporate carcasses.

Recessions come and go (though most in the last eighty years have not been quite so bad as the current one), and in most of them older, less efficient businesses fall by the wayside, beaten out by newer, flashier, more nimble opponents. Yet it is likely that this time around we're going to see a mass extinction event, because the very nature of business itself is changing.

The large-is-better business model evolved through much of the late 19th and 20th centuries because it was the most efficient model for the communication channels of the time - a hierarchical business model is a network with a bias towards centralized information dissemination and execution. Direction was passed from a leader to his subleaders, who would then break down the tasks pertinent to their domain and pass them down to their respective subleaders, until eventually you had specific tasks assigned to individuals at the leaf ends of the network. It also had the advantage of working well in a geographically centralized manner - each subtree usually represented a geographic aggregation of some sort.

Additionally, such command and control structures had the further benefit of pushing information back up through a series of management filters - if something was not perceived as important enough to engage the time of a given lieutenant, it wouldn't pass beyond that lieutenant to his superiors. This meant that, in theory, only the most important information would make it up to the top, and the role of the centralized decision maker became at least somewhat rational.
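A minimal sketch of that filtering behavior, with the management chain modeled as a sequence of thresholds a report has to clear in order to travel upward. The titles, thresholds and importance scores are invented for illustration.

```python
# Sketch of reports filtering up a management chain: each layer only
# forwards what it judges "important enough". Titles, thresholds and
# scores are invented for illustration.
chain = [
    ("team lead", 0.2),
    ("department manager", 0.5),
    ("division VP", 0.7),
    ("CEO", 0.9),
]

def escalate(importance):
    """Return the highest level a report of the given importance reaches."""
    reached = "front line"
    for title, threshold in chain:
        if importance < threshold:
            break                  # filtered out; never travels further up
        reached = title
    return reached

for importance, label in [(0.1, "routine status"),
                          (0.6, "spike in customer complaints"),
                          (0.95, "plant fire")]:
    print(f"{label} (importance {importance}) stops at: {escalate(importance)}")
```

The same structure that keeps the CEO's desk clear also guarantees that almost nothing from the leaf nodes - where the customers actually are - ever reaches the top.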

In practice, however, such filters also served to isolate these same decision makers from interacting with the outside world. Hierarchies by their very nature tend to promote privilege - the higher up the chain you are, the more you are rewarded, and in practice the less you are likely to interact with the people who actually use your business's products or services - instead, you interact with your counterparts at other businesses or organizations. As a consequence, hierarchies can become forts, with the leaders of the hierarchy only vaguely aware of (and usually far less mindful of) the actual work done collectively by the others in the organization - work that in turn pays their paychecks and bonuses - or of the people who pay for that work.

The hierarchical model is well suited for broadcast - information from a centralized source gets disseminated through the hierarchy, while the hierarchy in turn acts as a filter to analyze and respond to this data in aggregate. This has the side effect, however, of dehumanizing the response channel - you are less interested in whether Jane Doe was motivated by your messaging (advertising or otherwise) than in the fact that a 32-year-old Caucasian single woman who makes $64,000 a year, lives in a $550,000 house and is a vegetarian purchased your product. Jane Doe is a person; the latter is a demographic profile that can be used to see whether Product X is successful in getting Jane Doe to fork over her hard-earned money.

The Internet establishes an alternative set of communication channels that are very different from the hierarchical model. In effect, it makes for ad hoc, collaborative, overlapping interest groups. It makes aggregate collectivist behavior far easier to accomplish, and it means that information can spread very quickly, as it passes from interest group to interest group through common members.

Most companies originally thought that such interest groups were a good thing - after all, most of marketing involves targeting your message toward a given interest group while trying to reduce the exposure of the message outside of that interest group (since non-interested groups produce far fewer responses - it's not cost effective to advertise to people who lack either the means or the desire to purchase your goods or services). Company X could market its new organic power bar to such interest groups and expect a much higher conversion rate ... which in fact did happen.

What these companies were not expecting was that the members of this interest group would also pass negative information about the products (and the company) to one another ... and that they would talk back. This wasn't supposed to happen. If the power bar didn't taste very good, that information would spread just as quickly, and it was beyond the control of the company to fix. If the organic components really weren't organic, if the green message on the wrapper was at odds with the fact that the bar was produced in a factory in China under less than ideal conditions, if the CFO was involved in an affair with the CEO's wife, all of this information would get passed on ... and the company had no way of controlling this back-channel communication.

Corporate communication is very impersonal - its intent is not necessarily to inform, but rather to protect the hierarchy - to promote the successes, to spin damaging news, to obfuscate the communication access to the primary decision makers and in general to reduce potentially embarrassing contacts between the decision makers and the outside world. The problem of course is that as the dialog channels between people improved, the cold, mechanistic nature of corporate speak also became far more obvious - and more sinister. People react negatively when they realize that communications are one-sided - that while there may be a semblance of human communication going on, there's actually no one on the other side that is in a position to actually do anything about it ... it's a waste of time.

Beyond this, corporations are made up of people, and when those people feel that they have been abused by the company, they now have at their disposal powerful tools for disrupting those corporations. When people are laid off in a poor and demeaning way, or when customers feel they have been "shafted", they will lose whatever loyalty they may have had to the company in question - and will become increasingly shy about giving loyalty to any corporation. They will develop ideas and tools outside of the context of companies - something especially significant because it is often those very ideas and tools that the company would otherwise turn into products and sell itself. They will encourage others to boycott companies and suggest alternatives that reduce sales for the company in question.

In one scenario I saw recently, a disgruntled former customer of a cable company established a website and devoted himself to convincing others to take their business elsewhere. In the end that one customer probably cost the company $1.5 million in revenue, all over a cap on services that might have cost the company perhaps $30. Such anti-customers really didn't make much of a difference pre-Internet - the company could act with impunity because the real ability of that customer to affect the company was limited. Today, a single tweet from the right person (who might be either the anti-customer or someone sympathetic to the anti-customer) can have hugely negative consequences for a company.

The real difference between a company and an interest group (a social community) is surprisingly small - usually an agreement for revenue sharing. This means that whereas fifty years ago it may have taken several thousand people to establish and run a business of any complexity, today you can get by with perhaps fifteen or twenty - which in turn means that such companies need a much lower threshold of net revenue to be viable concerns. This is increasingly as true in capital-intensive sectors as it is in information services. Componentization and modularization of parts in various sectors mean that you can construct and customize even durable goods at only a slightly higher margin than a much larger factory, and because you don't have the significant overhead associated with the larger factory, the marginal costs even out.

This means that, even as dinosaurs like GM thrash about in their death throes, there are dozens of smaller companies making specialty cars that are far more responsive to new technology and market demands, at a small fraction of the overall costs that GM needs to develop a given car line.

The upshot of this is that we are in for a long period of business disaggregation - where huge conglomerates spin off companies to sink or swim, where small, ephemeral companies navigate more effectively than large ones, where the distinction between consumer and producer becomes blurred to irrelevance. People won't be any less loyal, but they'll be loyal to those "projects" that they themselves have a controlling interest in. Brand names are only significant as ways of identifying those prosumers who are most adept at navigating this world, and are increasingly tied into the "personal brand" - "I trust Jane Doe because I can communicate with her, her ventures generally succeed, and she knows how to involve others in her ideas."

It should be an interesting decade.

Where has all the money gone

I entered into an interesting Twitter exchange recently, to wit:

BrendanWenzel: A lot of people have "lost" money, but who is gaining it all? Wealth is never destroyed, but transfered. Who is it being transfered to?
kurt_cagle: Actually, in this case, "wealth" is just being destroyed, because assets are being repriced downward.
kurt_cagle: Most real wealth was made 2004-2006 by people in top 1%; we're just now discovering the fact that we've been robbed.
BrendanWenzel: So you are saying that these worthless assets never had value and were just a tool to steal wealth?
kurt_cagle: ... a tool to steal wealth? Um ... yup, pretty much. Did any investment banker really produce $30 million worth of value? No.


The numbers vary - from $2 trillion to more than $40 trillion depending upon how you measure it - but in any case, a lot of "wealth" has seemingly gone up in smoke in the last year. Retirement, pension and college funds have been cut by 50% or more, municipal bonds have turned to dust, and treasuries at the local, state and national level are bare. The world has, seemingly overnight, gone from being hyperfrenetic with activity to being, well, broke ... and broken.

The question that Brendan brought up is a sensible one - where did all that money go? Is there someone out there who's now sitting on a pile of everyone else's money? No ... and yes.

People, including bankers who should know better, tend to look upon money as being, well, solid. You work every day, and you get a paycheck for your efforts that represents a contract with your employer. That contract is usually slanted toward the employer - you provide the labor, and at the end of two weeks or one month or some other milestone, the employer gives you a piece of paper transferring a certain amount of value from the company's earnings to you. You take this to the bank, the bank deposits it, and from there you can "spend" this value.

Suppose, however, that the company has not made this money in earnings yet. Instead, they went to a bank and said "give us a line of credit, here is our plan to make value in the future". The bank evaluates the plan and the individuals involved, and if it feels like the plan will return a reasonable amount of earnings within a reasonable time, it will give the company that line of credit - a form of a loan, along with a fee to be added in order to compensate the bank for the risk that the company won't in fact make these earnings over the stated time.

This means that the money that you are making is not based upon existing value, but upon future value production. In essence, the company is in turn taking a risk that you will produce, though it is usually a pretty safe one. If you don't, then you will no longer receive that compensation, and someone else will be hired.
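A stripped-down way to see the bank's side of that bet: it lends against projected earnings, charging interest and fees to cover the chance that those earnings never materialize, and extends the line only if the expected repayment beats simply keeping the money. Every figure below is an assumption for illustration.

```python
# Sketch of lending against future earnings: expected repayment vs. risk.
# All figures are illustrative assumptions.
loan = 1_000_000          # credit line extended against projected earnings
rate = 0.10               # interest plus fees charged for the risk
p_success = 0.90          # bank's estimate that the plan actually pays off
recovery_if_fail = 0.40   # fraction recovered if the company folds

expected_repayment = (p_success * loan * (1 + rate)
                      + (1 - p_success) * loan * recovery_if_fail)
print(f"Expected repayment: ${expected_repayment:,.0f} on a ${loan:,} loan")
print("Extend the credit line" if expected_repayment > loan else "Decline")
```

Nudge the success estimate down a few points and the same arithmetic says decline, which is why the bank's willingness to keep lending hinges entirely on its guess about earnings that do not yet exist.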

Yet say that, after a couple of years, the company is not making enough money - the guess that was made concerning the profitability of the venture was off. The company's already sunk money into infrastructure, into salaries, into energy costs, and into intangibles such as marketing. The company can go back to the bank and ask for an additional loan, but the bank at some point needs to determine whether the ongoing effort will ever prove profitable - otherwise, it is simply throwing good money after bad. If the bank decides that no, it's not going to extend the loan, and the company doesn't find other investors (typically on more stringent terms because of the increased risk), it will close its doors, and everyone will be out of a job.

Now, in this particular venture you may have made money - though much of that money went into paying for necessities - housing, transportation, energy, food, information access and so forth - so you may have actually just broken even or even fallen behind. However, when the company fails, it can't turn around and ask for that money back. It's been spent. The money that the bank loaned has also been lost - the loan becomes non-performing, because it no longer generates revenue, and the bank takes a loss.

The fees the bank charges on the establishment of the loan, however, can be assessed early - at the time the loan is made. At some point, the bank manager might realize that taking the fees is less risky than waiting for the loan to mature, and sell the loan as a "security". This security is still potentially valuable, because it represents a steady stream of interest income, and an investor can buy it as a long-term performing vehicle - so long as the person or company who took out the loan can continue to make payments.

Now the bank, at this point, has been lobbying the government to let it sell these securities, and a particularly business-friendly administration gives the go-ahead. All of a sudden, a bank can make a loan, pocket the fees for that origination, then sell the loan as a security, taking additional fees. What this means is that the bank no longer has any real incentive to ensure that the person or company taking the loan can actually pay it back, because by the time it becomes an issue, it will be someone else's problem. The bank has essentially siphoned off a fairly significant amount of money from the transaction without actually creating significant value.
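A small sketch of why "originate and sell" removes the incentive to vet borrowers: the originating bank's income is collected up front, so it is identical whether the loan performs or defaults. The fee percentages and default rates are illustrative assumptions.

```python
# Originate-to-distribute sketch: the originator's income is fee-based and
# indifferent to whether the loan ever performs. Figures are assumptions.
principal = 300_000
origination_fee = 0.010       # assumed fee when the loan is written
securitization_fee = 0.005    # assumed fee when the loan is resold as a security

def originator_income(default_rate):
    # Fees are booked up front; the default risk now belongs to whoever
    # bought the security, so default_rate never enters the calculation.
    return principal * (origination_fee + securitization_fee)

for default_rate in (0.02, 0.20):
    print(f"Borrower default rate {default_rate:.0%}: "
          f"originator still pockets ${originator_income(default_rate):,.0f}")
```

The only way the originator makes more money is by writing more and bigger loans, which is exactly the behavior the next paragraph describes.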

What this means is that banks are encouraged to make many more loans, because they bear no consequences if a loan goes bad. If the loan is a mortgage or a lease, the bank may also encourage the brokers handling the sales of these properties to try to get top dollar, because it increases the fees that can be taken off the transaction. The mortgage broker sees no problem with that - he too gets a percentage off the top, so the more valuable the property, the more he makes. The county assessors who determine the baseline price will try to push property prices up as well, because that increases revenues in the tax coffers, and if tax revenues go up, well, it's good for the city or county.

Now, normally, this breaks down if interest rates are high - because the person who actually commits to the purchase has to pay the interest on top of the agreed-upon price and fees. However, if interest rates are kept generationally low, then even though the house may cost more, the individual payments may be smaller, especially if they can be spread out over a longer period of time. Then, of course, you also have speculators who buy up properties with no intention of paying the long-term price - they simply become brokers themselves, selling to someone else at a higher price in three or six months, because real estate prices always go up. The buyer may also simply not have the financial resources to purchase the property in the first place under normal circumstances, but with a bit of "creative accounting" they are encouraged to buy anyway.
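The arithmetic behind "the house costs more but the payment is smaller" is the standard fixed-rate amortization formula, monthly payment = P*r / (1 - (1+r)^-n). The prices and rates below are illustrative assumptions.

```python
# Standard fixed-rate amortization: payment = P*r / (1 - (1+r)**-n), with
# r the monthly rate and n the number of payments. Prices and rates are
# illustrative assumptions.
def monthly_payment(principal, annual_rate, years=30):
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

scenarios = [
    (250_000, 0.08),   # assumed price at a "normal" 8% rate
    (325_000, 0.05),   # assumed bubble-era price at a generationally low 5% rate
]
for price, rate in scenarios:
    print(f"${price:,} at {rate:.0%}: ${monthly_payment(price, rate):,.0f}/month")
```

Under these assumptions a house 30% more expensive still carries a lower monthly payment, which is exactly the dynamic that let prices climb while buyers looked only at the monthly number.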

Now this chain goes all the way up and down - ratings agencies are encouraged to rate securities higher than they should be, corporate raiders use risky securities (junk bonds) to effectively purchase companies, replacing actual earnings with debits against future earnings. Stock brokers use this debt to leverage purchases of stock with very little actual money committed, and so forth.

All of this activity involves replacing existing earnings - real work - with promised earnings - credit - and because there is comparatively little oversight, the actual obligation on the part of wage earners and company earnings climbs and climbs and climbs, until you get a situation where a person would have to work continuously, 24 hours a day, for a century or more to produce the real work that's been obligated on her, usually without her direct consent. That's clearly unfeasible, and the system ultimately collapses as each company or person fails.

Debts that the banks and shadow banks hold have to be written off, rather than being treated like assets. This reduces the amount of money that the banks can commit to writing loans, and also instills a sense of hyperconservatism in extending new loans, because they can no longer service the old ones. This causes credit availability to collapse, which means that companies can no longer pay their workers (as the paychecks were paid from the loan which was to be repaid by earnings).

As workers lose their jobs, they cut back on their spending, which causes other companies to go out of business, which only exacerbates the situation. Companies are forced to lower their prices in order to move any product, and a deflationary spiral sets in. Everything loses value as the availability of money dries up and markets plummet.

Eventually, demand for goods reasserts itself, as things wear out, as population grows, or as people become less fearful about the future. However, the damage has been done - the negotiated value of things has dropped dramatically, whether that's the price of a new car or the price of a stock, and people who purchased the stocks thinking they were a safe investment now discover that they're holding worthless paper - the company has either gone out of business or, if it survived, now has a much smaller cash position, and it will take time for it to get back to its earning potential, significantly reducing the long-term return on investment.

So, given that, chances are pretty good that there's not one person out there who is now sitting on everyone else's money. The money never really existed, save in potentia. What disappeared was the expected potential of that future labor.

However, that doesn't mean there aren't scoundrels. Companies that buy and sell these securities have profited immensely from the transaction fees and bonuses, which also came from future earnings. It would be much like being paid up front for the next thirty years of your wage-earning time. If the business fails, it makes no difference to you - you've already been paid handsomely, and can turn around and spend that money any way you choose.

Yet that money has to come from earnings at some point, and it does. It comes from pension funds that fail, leaving people who have invested with nothing. It comes from reduced pay elsewhere in the industry, as credit has been compromised. It comes from tax revenues, which decline dramatically in a recession because people don't have the wherewithal to pay. In other words, the thirty-million-dollar "bonus" that the hedge fund manager or bank CEO takes home comes directly or indirectly from the earnings of others, who now have to work longer just to get back to where they were.

So, yes, it was a Ponzi scheme, a bubble with a skim, caused by the greed of "financial professionals" and political officials, aided by tax cuts that were highly favorable to these same people, and a war that made it possible to hide similar fraud elsewhere. It is still going on, and it has bankrupted this country for years to come.

End of US Dollar as Reserve Currency

I'm going to get off the meta-trends that I've been following throughout the week and get down to a list of things that I've been tracking myself. I'll probably make this a regular feature - there's so much going on right now that I seriously doubt any one article could cover more than a small part of it, and trends can be disrupted (or just peter out) without actually amounting to anything. I welcome feedback here on what you've been watching as well, as I think the best way to become informed about the world is to get a different perspective from the one you currently have.

So, without further ado:

The US Dollar is losing its status as the world's reserve currency.

First, a quick definition here: in the 1940s, the Bretton Woods agreement among the major Western powers established the US Dollar as the world's reserve currency, pegged to gold, and in practice globally traded commodities - above all oil - came to be priced and settled in dollars. If you wanted to buy a barrel of crude on the spot market, you bought it in dollars, and if you wanted to sell that barrel, you accepted payment in dollars. Participation in this dollar-centered system also went hand in hand with receiving aid via the Marshall Plan, something that was sorely needed in Europe at the time.

In essence, what this meant in practice was that a country that needed oil (and all countries need oil) had to maintain a certain amount of its financial reserves in dollars. To get them, it either had to sell goods to the US, or exchange gold at a fixed rate for dollars, which were then typically held in the form of treasury bonds. On the flip side, a country could also, on demand, redeem its dollars for gold at that same fixed rate.

This had the immediate effect of swelling US gold and treasury reserves, reinforcing the economic dominance the country had built up during the war years. It also introduced a persistent inflationary bias, as the US printed more and more dollars to meet global demand. However, it also had a darker side effect for the banking industry - it kept banks' ability to leverage down to a very definite minimum, meaning they could originate comparatively few loans - and it increased the demand for gold globally as other countries began to recover.

Presidents Eisenhower, Kennedy and Johnson all struggled to defend the dollar's gold peg as reserves drained away, and the pressure finally forced Nixon, in 1971, to "close the gold window" and declare that the US would no longer honor the peg but would let the dollar float. This (along with the effects it had upon the oil-producing states) fed into the energy crisis of 1973-74 and the high inflation that followed. It was also one factor leading to the creation of the Euro.

Inflation was ultimately tamed by Paul Volcker, the Federal Reserve Chair, who raised interest rates dramatically in order to choke off inflation and attract foreign investment - a strategy which, while leading to a fairly severe recession in the short term, managed to accomplish the task, restoring confidence in American markets and laying the groundwork for much of the long bull market that followed.

However, without some form of backing, the degree of confidence in the dollar had largely become a function of the degree of trust in the US economy. This trust began to be eroded in the wake of the savings and loan scandals of the late 1980s and of the implosion of specific hedge funds after the economies of a number of Southeast Asian countries collapsed in the late 1990s.

However, the last decade has seen that uncertainty turn into outright distrust, as the Federal Reserve seemed to be deliberately manufacturing bubble after bubble in an effort to sustain an increasingly shaky financial system. The housing bubble in particular raised serious questions about the strength of the dollar, and by mid-2007 the value of the dollar had dropped fairly precipitously relative to other currencies. The Canadian Loonie, for instance, at one point briefly topped US$1.10 = CAN$1.00, after starting at about US$0.74 in 2002.

This was also reflected in the price of oil at the time. While oil prices were up fairly dramatically in Europe, they were up far more (percentage-wise) in the United States (the US has a very low gas tax rate while Canada and most European countries have much higher ones, meaning that the absolute prices were actually much closer). Oil speculation didn't peak until the summer of 2008, but by the fall of 2007 the first inklings of problems within the mortgage sector were already making themselves felt.

This isn't the place to go over the whole financial collapse of 2007-2009; that story is now familiar to most people. However, in its wake, a couple of very interesting things have happened. The first has been a massive flight to Treasuries (and thus the US Dollar), which has unwound most of those currency advances as investors have moved out of equities and property into a temporary store of value. For foreign investors, the assumption has likely been that it was wise to move out of falling markets into Treasuries rather than repatriating those funds. However, this has also had the effect of creating a bubble in Treasuries.

However, a second factor that's come into play has been that China has been purchasing Treasuries in order to keep its currency, the renminbi, pegged to the US dollar so as to remain competitive in providing goods and services. This has meant that it has ended up holding roughly $1.5 trillion in Treasuries and other dollar assets as of 2009 - money that in fact has largely been used (dubiously) for the Iraq war and for financing the mortgage bubble in the first place.

One way to think of a treasury note is to envision it as a stock warrant in USA, Inc. The warrant pays a dividend (interest) to the holder of that note. When a company goes public, it sells shares in the company, yet the company itself is worth only so much money (essentially some fraction of its potential lifetime earnings). This means that as the number of warrants issued rises, the individual return on those warrants drops - the warrants become worth less. At $1.5 trillion, China's holdings amount to roughly 10% of total US GDP. If China were to dump these holdings on the world market, the value of the dollar would collapse overnight, resulting within months in extraordinarily high inflation - high double- or even triple-digit inflation.
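As a rough sanity check on that "roughly 10%" figure, here is the arithmetic; the GDP number is an assumption of about $14 trillion for 2009, not something stated in the text.

    # Back-of-the-envelope check of the holdings-to-GDP ratio.
    china_dollar_holdings = 1.5e12   # figure cited above
    us_gdp_2009 = 14.3e12            # assumed 2009 US GDP (~$14 trillion)
    print(f"{china_dollar_holdings / us_gdp_2009:.1%}")   # ~10.5%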

China won't do that, because it would not only make its investment worthless, it would also completely destroy the economy of its largest market. However, it has all but stopped its own purchases of treasuries, and in the last month (March/April 2009) has devised a strategy which will let it significantly reduce its own exposure to American financial activities. It has approached the International Monetary Fund and asked for the creation of an IMF bond denominated in what are called Special Drawing Rights, which essentially would make it possible to set up direct currency exchanges with other countries - most notably Brazil, Russia and India, which, with China, make up what have become known as the BRIC economies. It has also created additional currency swap agreements with countries such as Argentina, Indonesia and a number of countries in central Africa. Significantly, all of these are oil or other resource producers, or act as brokers for the same.

Put simply, the goal of the Chinese is very much in accord with that of a number of oil-producing countries in the Middle East - reducing the historical role of the US Dollar as the global reserve currency. Given the antagonism that the US has engendered over the last decade, there is far more support for such a move than there has ever been in the past, and even though Obama's overtures will likely mend a few fences, the real damage to trust in the dollar has been done. Sometime within the next 3-5 years, it is likely that a global "reserve currency" will arise, one that consists of a basket of floating currencies and exchange agreements rather than any single country's currency. The US will likely be a part of that, of course, but it will no longer be the world's bank (or the world's first consumer) - and that has a number of potential implications for the US.

One of the largest is that the US credit card will officially be maxed out. The US borrowing debt ceiling has been a convenient fiction for a long time - as the US gets close to it, an act of Congress raises the ceiling. This was done largely on the strength of expected sales of US Treasuries. The world now has more US Treasuries than it can use, and once a basket currency becomes the norm, there will be far more interest in purchasing other countries' debt instruments, which means that demand for US Treasuries will likely remain depressed for some time.

This means that the budget deficits explode, and there is no way of even paying the interest on the debt without borrowing still more. One likely result is a downgrade in the rating of US bonds. Another is that, relatively soon, taxes will have to be raised, and fairly dramatically, in order to finance any new expenditures. The US will have to raise its own interest rates in order to attract more investors, at a time when the economy will just be beginning its recovery. Defense expenditures will have to be significantly reduced, social entitlements will have to be renegotiated, and the ability of the government to act will be increasingly hamstrung.
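To see why unsold debt and rising rates squeeze so hard, here is a small sketch of the interest-burden arithmetic; the debt and revenue figures are rough, illustrative 2009-era magnitudes, not actual budget numbers.

    # Interest cost on a fixed stock of debt at different average rates,
    # expressed as a share of annual federal revenue.
    # Illustrative assumptions only: ~$11 trillion of debt,
    # ~$2.1 trillion of annual revenue.
    debt = 11e12
    annual_revenue = 2.1e12
    for avg_rate in (0.02, 0.04, 0.06):
        interest = debt * avg_rate
        share = interest / annual_revenue
        print(f"average rate {avg_rate:.0%}: "
              f"${interest / 1e12:.2f} trillion in interest, "
              f"{share:.0%} of revenue")
    # If rates must rise to keep attracting buyers, the interest bill
    # alone starts crowding out everything else.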

This will also mean that Americans will have to save more. At the moment, much of the effort in the recovery is going towards getting Americans to spend more, to increase the velocity of money in the system, but it's becoming increasingly obvious that this isn't working. Now, the problem with saving is that while it is prudent at the individual level, it reduces the amount of money available to create businesses (in the short term) and reduces the tax revenues that derive from the velocity of money at the macro level. In other words, a savings-oriented mindset is anathema to a consumer economy ... at least for a while.

However, my own take is that we're not going to end up going back to a 30's style "great depression", nor do I necessarily see the stark future outlined by people like James Howard Kunstler. We're in the midst of a major structural change in society, one fed in great part by the profound changes in information infrastructure and additionally shaped by a growing awareness about the fragility of the underlying ecosystem. The consumer culture of the 20th century is failing, but that doesn't mean that all of a sudden we should all become socialists or communists or cogs in some oligarchical brave new world order.

What it does mean is that we're making it up as we go along. One of the reasons for the original Bretton Woods accords was to attempt to guarantee, as much as possible, the notion of full employment, at the expense of inflating the currency. The reality now is that full time employment, at least in the traditional sense, is breaking down. We need to make some hard decisions about what constitutes a valid standard of living - and what constitutes an excessive one. We need to come to terms with information and reputation as forms of currency, with virtual currencies, and with the degree to which currency reflects value.

All of these things (and many, many more) will ultimately need to be determined as the world adjusts to a new abstraction of currency, and the more that efforts are made to return to an unsustainable status quo, the longer it will take before a true recovery can take place - and the more turbulent society will become.

Future Proof: Tornadoes and Turbulence

When I was sixteen, I saw my first tornadoes. My family had decided to visit some friends in Cheyenne, Wyoming, so we loaded up the van and made the trip from Peoria, Illinois, encountering unsettled weather all during the two-day trek. As we were unloading our bags, I noticed my father periodically staring up at the sky, which had taken on a peculiar green cast. A few minutes passed, the sky became darker and more ominous, then out of the maelstrom one, then two, and finally six twisters came snaking down from the clouds. For nearly half an hour we watched as the tornadoes ran along a couple of ridges, occasionally flattening one house and hopping over the next, until the storm lost enough strength that the twisters became wispier and more tentative and finally faded altogether.

By the end, the twisters had collectively destroyed 200 homes, flipped numerous cars, caused significant damage to the governor's mansion, and thrown a commuter plane through the door of an aircraft hangar. It was one of the most destructive tornadic storms ever to hit the state, and gave me a healthy appreciation for the power, majesty and mindless destructiveness of nature.

One of the most interesting facts about tornadoes is also one of the least appreciated. Tornadoes come from thunderstorms - as anyone who has seen those ominous green thunderheads can attest - but they are not, in fact, part of the powerful rotating cell of hot moist air and cold dry air that makes up most thunderstorms. Instead, they arise from turbulence along the edges of this large rotating system. Thunderstorms can move quickly, and can rotate quickly. As they do so, they drag the air around them, but this drag is uneven ... and is influenced by such factors as the topography of the ground, the overall viscosity of the air, and the formation of wind shears and streams ahead of the storm - in other words, the environment external to the storm itself.

Tornadoes are directly related to the vortices you get when you run your hand through water in a trough or other constrained place - they are islands of temporary stability within an otherwise unstable environment. They also act to siphon off a lot of the potential energy within the storm itself, converting that energy into kinetic energy, which eventually dissipates as drag with the rest of the environment. Once the tornadoes release their energy, this also typically pushes the storm down a level of organization and energy to the point where it can no longer hold the water vapor that it is carrying, which then causes the heavy rains that usually follow such storms as the structure dissipates.

Every system, including systems of abstraction, requires energy of some sort to maintain it. Similarly, every system interacting with things outside of itself creates drag, resulting in turbulence. Turbulence should be seen as the transfer of energy out of a system into the environment, and as such is very closely linked with thermodynamics. This holds as true of software and social structures as it does of physical systems, as long as you understand that in both cases what you are dealing with are systems of nested abstractions.

This doesn't mean that outside of every social structure there's a giant whirlpool or tornado waiting to happen. Rather, it's worth understanding that any system is made up of interacting parts that for the most part have achieved a fairly high degree of internal efficiency. One way of thinking about this is that the system has a certain momentum associated with it - energy and information moves through the system in such a way as to keep the system cohesive.

However, especially at the edges, this energy drags against the outside world, and in so doing it creates pools of resistance and countervailing forces. Normally, such forces are comparatively small, and in many ways they can actually contribute to the underlying cohesiveness of the primary system, because they create a barrier of insulation against external stimuli or impulses - the turbulent counterflows absorb the attack, dissipating or at least blunting the impact upon the system. One way of thinking about this is that people may feel a certain ambivalence about a leader or political group, but they fear change from outside more than they do the status quo ("better the devil you know than the one you don't").

However, in the presence of other dynamic systems, sometimes the turbulence that emerges becomes large enough and cohesive enough to become stable in its own right, especially as one particularly stable "whirlpool" merges with another.

A good example of this can be seen in the rise of Open Source software. Microsoft in particular had managed to dominate the software sector by the mid-1980s, and with it the proprietary software model had become the accepted mode of operation by the mid-1990s. However, Microsoft also ended up stirring both resentment among other development groups and concern among customers who were afraid of vendor lock-in.

This set up turbulence around Microsoft's "system". Any one piece of that turbulence - Linus Torvalds's creation of Linux, the rise of Apache as an increasingly popular web server, the GNU GPL, Sun's release of the StarOffice code as OpenOffice, and so forth - individually bled small amounts of energy from Microsoft, but nothing that seriously impeded its own growth. However, each piece of turbulence would interact with the others, and after a while a new countervailing system emerged out of that turbulence. A tornado or whirlpool is a cohesive system that draws on the energy of the overall supercell, and the larger or more powerful the tornado becomes, the more energy it bleeds off the main cell. Open source soon began to bleed energy away from the proprietary model that Microsoft most clearly embodied at the time, pulling in more developers, more investment, more potential users.

Up to a certain point, the energy entering a system ends up as more turbulence and more quasi-stable neo-systems, as well as providing the necessary glue for smaller systems to merge into larger ones. However, there's a certain balance here - too much energy entering an environment can prove disruptive overall, as the turbulence makes it too difficult for new systems to maintain cohesiveness; the turbulence-spawned subsystems are disrupted by their own turbulence (in essence, the market is boiling at that point). Too little energy, and you get systemic decay, where the least stable systems fall apart. Typically, transitions from one level of abstraction to another involve energy exceeding or failing to reach a critical threshold for that system.

From the future analyst's standpoint, one of the lessons to be learned is that when you look at what appears to be a stable system, look at where it is causing the most turbulence. At the moment, for instance, the whole of desktop computing is being challenged by the cloud, a universe of services that individually may not be a match for the corresponding desktop app, but that collectively are reshaping the programming paradigm dramatically. The traditional world of publishing is under assault from a myriad of social media applications that individually are not that threatening, but which together are forming a cohesive interactive system of their own that has traditional publishing on the ropes. Centralized power distribution, meanwhile, is being challenged not by a single new power source but by a whole spectrum of technologies that each emerged in response to problems the existing grid failed to answer, and that collectively are creating a new system that challenges most of the core assumptions about power distribution that have been considered "holy writ" since the 1920s.

In other words, when looking toward investing (whether time, money, or career involvement), look toward areas where countervailing technologies are emerging, and pay special attention to those that seem to develop easy synergies with other complementary technologies. In the energy sector, for instance, solar energy (photovoltaics, including beamed microwave energy), geothermal pumps, intelligent energy routers, ultracapacitors, hybrid automobiles, maglev trains, recycled-heat systems and wind farms together make up a cohesive set of technologies that are complementary to one another, and that collectively form a self-reinforcing system. Individually, they won't replace the existing carbon-driven fuel system, but collectively, they may very well.

Future Proof: From Word of Mouth to the Open Book

We are a talkative species.

If you take a look at the bulk of inventions produced in the last 10,000 years, they fall into four broad swaths - better ways to move things (and ourselves), better ways to protect ourselves, better ways to feed ourselves, and better ways to communicate with one another. Communication with one another is such a strong imperative that one of the harshest punishments that we can inflict on people is to deprive them of that communication - to put them in solitary confinement, to exile them to the wilderness, to "ex-communicate" them. In many primitive cultures, should a person commit manslaughter or some similar crime and get caught, they became "dead" - not killed in retaliation, but made a non-person that others were not permitted to acknowledge or speak to.

Because of that importance, how we communicate is a very significant thread for the future analyst to watch. The predominant communication channels that a culture uses will dictate its organizational structure, more so than any other factor. In hunter-gatherer societies, communication (beyond one-to-one local communication) is typically done communally, within groups. For formal communications - when decisions need to be made, for instance, or in the recitation of (and addition to) a community's memory structure - the role of speaker was typically formalized: the speaker was the one who held a given totem, or who was given the floor. Communication range was also limited to the speed at which a man or woman could walk or run.

Additionally, these early cultures typically made use of a living long-term memory, usually via an oral "song" that kept intact the important stories, historical figures, legends, and constraints of the group. One of the fascinating things that neurologists have found is that musical memories are stored very differently in the brain than spoken memories, and that such memories are typically retained much longer and with better fidelity (in part, it seems, because they are encoded in different brain structures than ordinary speech). Kinesthetic memories similarly tend to be retained far better. This may be why most people within these cultures were taught oral history as a combination of chant and dance - the body actually "remembers" this information at a deeper level than it retains speech.

Nomadic cultures made an important discovery - horses not only made for good food, but if you could manage to sneak up on a horse, it was possible to actually ride it. At first such rides were probably just short enough to put a spear in it, but after a while some genius realized that if they could actually control horses, they could go far faster than they could on foot. Beyond the obvious advantages from a food hunting perspective, one additional advantage was that horse-borne messengers could communicate far more effectively with people at greater distances. This made it possible to coordinate actions, and was in fact one of the first instances of hierarchical military structures - a warlord with one force could communicate effectively with additional forces under his captains, who could in turn coordinate their forces with lieutenants.

Agrarian communities developed a somewhat more defensive structure, designed primarily to keep these same nomadic cultures out, but also because the communication requirements of agriculture are broader. Farming is a chancy business - you're forced into defending a patch of land, since running away isn't really an option unless you're willing to starve - so you needed ways of coordinating the troops (again, a hierarchical structure). However, you also needed to manage inventories, to determine how much of a given crop to hold back as seed, to set prices on grain and other goods, and, ultimately, how much to tax people for the services that made all of this possible.

As discussed in an earlier column, what this amounted to was the process of shifting abstraction levels. This can be seen in mythology. Tribal mythologies were very animistic - every grove, brook, wind and cave had its attendant spirit, but for the most part those spirits simply existed - you acknowledged their existence and occasionally bribed them to ensure success in your ventures, but there was little in the way of hierarchy.

Most agrarian societies, on the other hand, very quickly established hierarchical models - supreme gods, and then secondary and tertiary gods - that reflected the growing power of centralization in human hierarchies. The warlord became the god incarnate, and power became concentrated in bureaucracies - priestly castes, military castes, merchant castes.

It's thus perhaps not surprising that writing only came about with this shift in complexity; tribal societies have no need for writing, but agrarian ones have a large number of such needs. The emergence of writing was a radical change in human society - first because it meant that humans didn't need to expend as much of their thought processes on rote memorization, and likely for the first time could start thinking about information in a way that wasn't tied specifically to a generational oral record.

Indeed, one interesting speculation is that "spoken" language as we know it may only have emerged around the time that writing began, and that most languages prior to that were likely sung rather than spoken. One possible indication of this is to look at cultures in the last hundred years that had no formal written language, and compare what happens before and after they are exposed to writing. Typically, children from these cultures who grow up exposed to writing tend to have far worse rote verbal memorization, though far better analytic ability.

It's worth noting that reading and writing also cause a significant change in the communication structures of a society. Writing is an asynchronous operation - information placed in writing does not require the presence of the speaker - so you could write "letters" that allowed (slow) communication between people who were not geographically close.

Additionally, and more subtly, it becomes possible to scan written information in a way that's simply not possible with speech. This in turn led to breaking blocks of narrative into smaller, more digestible portions, a process that almost invariably occurs as new media emerge. The earliest written narratives were literally epic in scale - they represented a story that might be told over several hours in an evening, because they were almost certainly based upon earlier oral stories. However, as writing became more sophisticated, it began to develop a recursive hierarchical structure of its own as people began to master the nuances of committing symbolic representations of meaning to a physical medium.

Most early literate cultures developed a "bible" of some sort, a written work usually attributed to divine provenance, that encoded the mythos (the legends, accepted history, genealogies and so forth) and ethos (the ethical rules or laws that described what was acceptable and unacceptable within the society) of that culture. The Hebrew Torah and Talmud, the Islamic Quran, the Christian Old and New Testaments, the Hindu Mahabharata and Ramayana - all of these "books" emerged in cultures that had established active literary traditions, and most had had them long enough to accumulate a body of related "subordinate books". Indeed, by some estimates the "Bible" alone represents the political and cultural selection of between 80 and 110 different books, depending upon the particular sub-branch of Christianity or Judaism, with another few dozen books that were included in one version or another over the years but have since been dropped.

Cultures of the Book illustrate how powerful the advent of writing was. With a single cultural canon, mores and ethics can be established independent of geography. For instance, the Old Testament represents the ethos and history of a desert-based culture. Desert cultures are typified by a nomadic existence, a male-dominated society in which women were usually treated as chattel, a strong sense of hierarchy, a low premium placed on the value of human life, and a very competitive warrior ethos. Even the New Testament, which may have been influenced by the Dionysian Mysteries so prevalent in Asia Minor at the time, is still filtered through this desert-culture lens.

Yet because of the "authority" that the book has compared to more transient oral traditions, Christianity was carried all the way to the wilds of Northern Europe, England, Scotland, Ireland and Wales, which previously had a forest-culture structure - far more gender equality and egalitarianism, strong oral traditions but only very crude literary ones, a far higher sanctity of life, a much stronger clan or family basis, and so on. It's perhaps not surprising that so many of the heresies that the Catholic Church eventually had to stamp out came mostly from the north, as a certain cultural schizophrenia set in when a fairly alien cultural outlook became overlaid upon a very different foundation.

The migration of book production to the north also brought about the next major evolution in communication - the shift from papyrus scrolls to vellum books. Papyrus was made from reeds, and over time became increasingly brittle - it usually could support only a minimal amount of pressure before it crumbled - so papyrus rolled around two rods, cassette-tape style, was the most effective way to store it.

Vellum, on the other hand, was made from animal skin, which was far more plentiful in the north. Because of the curing process, vellum was remarkably resistant to fading or crumbling (indeed, many vellum books survive to the present day in very good condition). However, a skin was, by its very nature, much more limited in dimension, which eventually led to vellum leaves that were first stacked together, then later sewn together, into a new arrangement in which the content was displayed as pages.

The introduction of a new communication channel is quite frequently accompanied by significant upheavals in culture, especially if the new channel is markedly superior to the old. The first major Western printed work was the Gutenberg Bible, completed around 1455. Printing was quickly taken up in England by William Caxton and others in the 1470s, and refined by the Italian Aldus Manutius in the 1490s.

One of Caxton's most significant innovations was actually a cost-saving measure - rather than using a single folio page for a book (which resulted in very large books), he subdivided the folio page into quartos (quarters), and figured out how to orient the pages so that such quartos could be efficiently printed and bound. This essentially meant that you could produce four times as many "books" with the same effort, and it also produced the first truly portable book since the scroll, which had the effect of lighting a fire under the nascent publishing industry.

This technology change was likely one of the major factors in the Reformation and the rise of Protestantism. Prior to this period, most bibles were owned only by churches or the very wealthy and powerful. With Caxton-style books (and a subsequent shift away from expensive vellum to cheaper rag paper), bibles (and many other books) moved into the realm of being affordable (albeit still expensive) for the average middle-class burgher or shopkeeper.

Martin Luther's innovation (and it's worth understanding that it was an innovation) was to translate the contents of these bibles from Ecclesiastical Latin into contemporary German. This had the immediate effect of letting ordinary people understand and interpret what had, up until then, been disseminated only by priests and clergy. In modern parlance, Luther disintermediated the priests. This fairly quickly undercut the legitimacy of the Catholic Church (especially in the North), and led to the rise of a new class of clergy who adapted to the new technology by shifting from the role of arbiters to the role of guides and interpreters.

Of course, the established order did not go quietly into that good night - it seldom does. Once a given communication channel stabilizes, a social order will tend to evolve around it, to become invested in it. This is especially true where the communication system is hierarchical and meshes with a hierarchical mindset. Once you give everyone else a technology that had previously been available only to the gatekeepers - whether affordable books in the language that people actually spoke, or low-cost publishing systems that bypass the established news providers - the value of the existing services plummets, while those who master production within the new medium are able to establish new measures of value.

What's more, invariably the first uses of a new medium are to recreate the dominant pattern of the old. The vast majority of the new works produced during the mid-15th century were bibles. Of course, this undermined the scriptoria throughout Europe - a single bible might take a team of monks the better part of a decade to create, whereas it might take only a few months to set and print a bible using a press, and once one bible was printed, dozens more could be run off until the first wooden type blocks wore out. Once printers began casting type from molten lead, hundreds of such books could be created.

Yet the real changes - the truly political ones - came as printers began to realize that while the demand for Latin bibles was high, it wasn't infinite, and eventually they began to examine other uses. The translation of bibles into contemporary languages (vernacular, or common-tongue, versions) became an act of defiance of the existing religious establishment - as well as a means of controlling the message for local kings and rulers trying to break the stranglehold that the church had held on their lands for years.

It also meant that other books were soon published. Histories, books of poetry, philosophical tracts, and similar works emerged around this time, as the medium made such works economical to produce, and in so doing laid the groundwork for the birth of most forms of contemporary literature. In many ways, publishing in the period from 1470 to about 1530 was as dynamic a period of innovation as the Internet would be five hundred years later. By the end of this period, the Reformation would be sweeping Northern Europe just as the Renaissance was sweeping Southern Europe. The church, seemingly dominant and invincible in 1450, would be torn by strife and dissension as a thousand-year-old order disintegrated.

There are a number of lessons to be learned here. Changing communication channels can have huge impacts upon society, something that we're only just beginning to face today. It is a mistake to see the world of 2020 as being much like today, because the very structures that formed the foundation of the last couple of centuries are now being torn asunder in very much the same way. More on this in the second part of this blog post.

Future Proof: Catching Black Swans

Recently an earthquake hit the town of L'Aquila in Italy, collapsing a number of buildings, killing around three hundred people and leaving tens of thousands homeless. Last year Hurricane Ike slammed into Galveston, Texas, leaving many parts of the coastal community submerged. In the news of late have been the "one-time write-offs" that banks are taking because of the extraordinary credit crisis.

The one thing that all three of these events have in common is that they appear to be "Black Swans", a term that the mathematician and former trader Nassim Taleb used to describe events that seem wildly improbable yet nonetheless do occasionally happen. The term derives from the old saying that, since all swans are white, something "as rare as a black swan" is so improbable that there is no way it could happen. Of course, a species of black swan was eventually discovered in Australia, which was Taleb's point - beware of assuming that simply because events are rare, they will never happen - and when they do happen, they tend to cluster.

To understand why black swans are not as rare as one might think (and to anticipate their appearance), it's worth going back to the abstraction model proposed a couple of posts ago. Almost all complex systems exist at multiple levels of abstraction. One way of thinking about this is to envision the human body as being made up of subsystems - organs - each of which is in turn made up of tissues, which are in turn made up of cells. A person in peak health has all of her subsystems (organs) operating more or less optimally.

However, one day a carcinogenic agent enters the body - tobacco smoke, asbestos, environmental steroids; the list is rather depressingly long. A cell in her breast mutates in the presence of the carcinogen, losing the ability to "shut off and die", which most cells do after they've reached the point where their internal mechanisms are no longer sufficient to do the job efficiently. The cancer spreads each time the mutated cell undergoes what would otherwise be ordinary mitosis. If the woman is lucky, a routine examination will find the cancer while it is fairly small, at which point the best solution is to remove the cancerous tissue.

If the woman is unlucky, the cancer will grow until it finds a conduit (typically a lymph node or a blood vessel). Cancer cells that break free from the mass are transported through the conduit until they end up somewhere else in the body, at which point they attach themselves to other tissue and continue spreading. The cancer cells crowd out other cells, choking off access to blood vessels or waste channels, and other cells either become cancerous in turn or become necrotic - dying but not being removed by the body's defenses. The woman tires more easily as energy that would normally go to maintenance of the body is increasingly co-opted by the cancer cells. Tissue becomes tender and inflamed, and pain caused by cancer cells crowding in on nerve endings becomes more constant.

If the cancer spreads to the lungs, breathing becomes progressively more difficult. If it spreads into the lymph system, the woman has more trouble fighting off infections and becomes sick more often. If it spreads into the bone, normal stresses may cause the bone to snap.

The broken bone is a black swan event - it seems unlikely that a bone would break under typical stress, but in point of fact this isn't typical stress. The system has been compromised, and the cancer has spread in a spider-web-like fashion through much of the body. An aggressive fight against the cancer, by irradiating it or using poisons (chemotherapy), may be able to remove the diseased tissue, but typically it does so by further weakening the ability of the body to function.

The body does not die because of the cancer. Instead, the cancer causes each system in turn to become less efficient, and ultimately to fail because it can't get the energy necessary to continue. Once one organ fails, it increases the likelihood that other systems that are dependent upon that organ will also fail. The unfortunate woman dies of system failure.

This rather detailed and morbid description still serves as a metaphor for other systems. Complex systems are made up of simpler ones, which are made up of simpler ones still. Corruption usually occurs fairly far down in a given system, but most complex systems are generally fairly effective at catching and eliminating the obvious points of corruption (or, in the worst case, sequestering them off in isolation). Corruption here simply means a subversion of the normal functions of that particular abstraction - a bridge inspector signing off an inspection report after only a perfunctory check of the bridge, an employee stealing supplies from the supply cabinets, a student cheating on an exam or a businessman cheating on his taxes, a radical publishing seditious tracts, to name a few of the many, many examples.

Physical-system analogs would be a patch of snowpack on a mountainside that gets more sun than normal while still holding up the snow around it, a particularly warm, dry, dust-laden wind coming off the Sahara into the Atlantic Ocean, or the gradual creep of rising temperatures slowly pushing flowering and bee pollination out of sync.

The point about most such corruption (i.e., regions of potential instability) is that, for systems in quasi-equilibrium, the corruption usually has comparatively little impact over the short term. Most systems have regulating mechanisms that tend to correct for such instabilities - the office manager notices that one department is using more supplies than the other, and a bit of surveillance reveals that one employee is using dramatically more than he should be. At that point, the employee is summarily fired, and a new employee hired to replace him, and the message is made clear - you steal, you're gone. This tends to move the system back into equilibrium.

The regulation and action is not a normal event - it is only undertaken when corruption is noticed. It's a small "collapse", one that may result in some disruption of activities and hence impact the efficiency of the abstraction - and for the employee it results in a significant disruption from the way things were. However, such feedback cycles normally keep the system relatively stable.

However, over time the corruption can become more endemic, and move to a higher level of abstraction. The managing bridge inspector is lax in checking reports, and the inspectors under him avoid looking at those places on the bridge that are awkward to get to or would require getting especially dirty; the comptroller of a company works with one or two accountants to falsify the books; the teacher at a university starts accepting bribes and sexual favors for grades; a company provides campaign donations to a politician in order to get a tax break or special legislative consideration.

Note in this case that there are two levels of abstraction involved in all of these scenarios. Generally the role of a manager is to act as a governor or regulator on the actions of others, to provide negative (damping) feedback to minimize corruption in the system. When that feedback is subverted, it amplifies the corruption rather than reducing it, and makes it increasingly likely that the corruption will compromise the stability of the abstraction layer as a whole.
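The difference between damping feedback and subverted (amplifying) feedback can be sketched in a few lines of Python; the gains and the notion of a numeric "corruption level" are arbitrary toy quantities, but the two trajectories show the basic point.

    # Toy model: a little new corruption appears each period; a manager's
    # feedback either damps the existing level (gain < 1) or, once the
    # manager is in on it, amplifies it (gain > 1).
    def corruption_after(gain, periods=20):
        level = 1.0
        for _ in range(periods):
            level = level * gain + 1.0   # carry-over plus new corruption
        return level

    print(round(corruption_after(gain=0.5), 2))   # ~2.0: settles at a low, stable level
    print(round(corruption_after(gain=1.2), 2))   # ~225: compounds until something breaks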

One of the more interesting phenomena that takes place in systems of abstraction is the paradox that the longer a system remains in equilibrium, the more likely it is to become unstable. To understand why, consider that most corrective feedback occurs only after a problem has reached a crisis point - the office manager finds that supplies she just ordered are gone, and she can't think of any legitimate reason why they would be. As she is responsible for her budget, she knows that she has less latitude if excess pilferage is eating into it (and that she could face "corrective action" herself if such thefts continue to go unexplained).

However, as the organization gets larger, the office manager has more responsibilities, and tracking down pilferage drops down the list. The attitude begins to form that office supplies are fair game, and people become more inclined to take supplies whether they need them or not - and those office supplies begin to give way to bigger-ticket items like computers and projectors. Expense accounts start to become padded, and pretty soon become a significant part of a person's income. Eventually the amounts get high enough to affect the company's bottom line, particularly if the comptroller and his friends in accounting are in on it (getting kickbacks for equipment that's disappearing).

What makes this worse is that it has gone from being an isolated instance to being pervasive and endemic. You can't fire everyone without bringing the company to its knees. Eventually you are forced to fire the comptroller, establish a new, tighter accounting system for all internal goods and services, alienate a number of employees who had come to see the office supplies as a right, and then spend several months searching for a new accounting team.

Stability breeds complacency, which breeds instability. The economist Hyman Minsky laid out this hypothesis for financial systems beginning in the 1960s, but it holds in most complex multi-layered abstraction systems. Deregulation of the banking industry, low interest rates on the part of the Federal Reserve and a push towards home ownership in the early 2000s meant that bankers could make higher-risk mortgage loans to increasingly unqualified buyers and then sell these loans to other financial institutions. These financial companies would combine the mortgages in novel (and dangerous) ways and sell them as financial vehicles to investors. The investors would then use these securities as collateral to build increasingly unsustainable leverage, while insurance companies sold "black swan" insurance that they never expected to pay out, in order to make the securities palatable to the accountants. Meanwhile, the real estate agents worked with the appraisal firms and builders to extract the largest possible fees, and homeowners in turn found themselves forced to take out ever larger loans for the same properties.

Low initial rates on loans were reset after a specified period to a much higher rate, and people began to fall behind on their payments; in time, the cascade of defaults and "jingle mail" rippled through the system. The rapidly receding value of these assets caused a Minsky Moment in September 2008, when a key investment bank, Lehman Brothers, was allowed to go bankrupt. Because of the unwinding of the positions that Lehman had held, this created the financial equivalent of a heart attack as credit disappeared from the market overnight.
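To get a feel for how sharp those resets could be, here's a small sketch reusing the standard payment formula; the 2/28-style loan terms below are hypothetical, chosen only to show the shape of the jump.

    # Hypothetical $300,000 loan: a 4% teaser rate for the first two years,
    # resetting to 9% for the remaining 28 years.
    def monthly_payment(principal, annual_rate, years):
        r, n = annual_rate / 12, years * 12
        return principal * r / (1 - (1 + r) ** -n)

    teaser = monthly_payment(300_000, 0.04, 30)
    print(f"teaser-period payment: ${teaser:,.0f}")       # ~$1,432
    # Ignoring the small amount of principal repaid in the first two years,
    # the post-reset payment is roughly:
    reset = monthly_payment(300_000, 0.09, 28)
    print(f"post-reset payment:    ${reset:,.0f}")        # ~$2,449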

Starved for credit, companies could no longer sustain regular payroll, watched their energy supply (cash inflow) dry up as consumers pulled back abruptly in spending and soon were forced into rapid liquidation. Abstractions were unwound as energy (in the form of credit) disappeared from the system. Unemployment shot up as millions of people were forced out of work, accelerating the crisis, while attempts to recapitalize the banks have so far fallen short of solving the systemic problems.

Instabilities - turbulence - disrupted each layer of abstraction. This process is still ongoing, and will likely take one to two more years in order for the turbulence to dissipate to a level that new structures can start to form again, at a lower energy level.

The lesson for futurists: first, disruptions do not happen without reason. Most disruptions occur when a stable (complacent) abstraction becomes corrupt and brittle. In a healthy, stable system, external stimuli impinge upon the system all the time, but it is resilient enough to ward them off. As systems mature they become more fragile, and their ability to adapt becomes increasingly compromised. Eventually, a stimulus occurs that causes a breakdown of a particular part of the system, and the system has become so interdependent that this shock then gets passed on, destructively, to other subsystems. The subsequent loss of system integrity can prove fatal, and the system will unwind to a less energy-intensive state as energy bleeds into turbulence.

In general, you cannot predict what shock will ultimately send a system over the edge, and it's futile to try. What's important is to examine whether, given a shock, a system is resilient enough to absorb it, or whether the shock will prove devastating. The role of both analysts (who are futurists) and regulators is to do the research to determine which organizations are too fragile, and then to examine the consequences that a shock to the system would have.

One final note here: a good place to look is at organizations that are deemed "too big to fail". Most likely, once a company (or a government) reaches that point, it is already overdue for an earthquake.