Bullshit Industry Prologue - Thought Factories & the Invasion of the RoombaTurks! Section 1

Tuesday, July 20th 2010

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines ... [this machine would be] the last invention that man need ever make." (I.J. Good)

"If you really think about it, the fact that anything on a computer works is amazing. At a low level, magnets read and write ones and zeros on ridiculously fast rotating platters, and then are assembled into files, which then is stored in memory, which is then passed through a video card and converted into some format that can be displayed on a screen. Throw in networked computers and the potential for signal loss over long distances and the probability that something at some point in the process will fail, and the potential for failure increases exponentially. Maybe I'm alone, but I'm in awe of the fact that my computer doesn't just randomly catch fire and explode." (Jake Vinson - Daily WTF, 01 Aug 2007)

The Future Doesn't Need Us

"[Econometrics is ...] the art of drawing a crooked line from an unproved assumption to a foregone conclusion." (Peter Kennedy, A Guide to Econometrics, MIT Press, 1992, p. 7)

Newlan's Truism: An "acceptable" level of unemployment means that the government economist to whom it is acceptable still has a job.

There's an IBM commercial set in the near future. It features a shady-looking character, possibly a criminal, who makes his way through a supermarket, concealing various items under his coat. When he leaves the supermarket, an imposing security guard stops him. You would expect the guard to go, "busted!" and perhaps collar the young guy for shop-lifting. But no, he just says: "Excuse me sir, you forgot your receipt." The voice-over says, smugly: "Checkout lines, who needs 'em? This is the future of e-business." Due to technology that automatically scans and then bills customers for purchases as they leave the supermarket, there is no longer any need for supermarket checkout staff.

Bill Joy coined the phrase "the future doesn't need us". What he meant was that the technology of the future will operate without human intervention. Why? When someone is having trouble doing something, or it requires effort to do, the technically minded person will think to herself: "there must be a better way!".

So she will go off and invent a new tool or a new technique to make it easier. Technology is all about eliminating toil. It is about reducing work and, in the end, eliminating work altogether.

Some don't like it. David Bacon writes:

Structural change has transformed the way the nation buys its food, and it's squeezing workers in the stores against the wall. Mom-and-pop food stores and small markets were swallowed up so long ago that they're not even part of the memories of most young working families. In the old-style supermarkets which replaced them, dozens of workers walked the aisles at night with clipboards, counting and pricing items on the shelves, while checkers hand-punched the buttons on cash registers. Those days are gone as well.

Taking their place are automated supermarkets where a few workers now do the work which used to employ many. Food stores are no longer filled with workers hand counting items or punching buttons. Automated systems scan items for prices at checkout counters, and automatically update the stores' inventory records at the same time.

From Animal to Mechanical Power

According to Bruce L. Gardner, the first petrol-driven tractor was built in 1892, and the first commercial tractor business in the US was Hart-Parr, which released its first commercial tractor in 1901. The number of horses in use in American agriculture reached its highest point in early 1915, the number of mules in 1925. He says: "In the forty-five year period from 1915 to 1960 the transition from animal to mechanical power was completed." New technology puts living things with limited skills, like the noble draft animal, permanently out of work. Obsolete. Why not people too?

Human brains are complex systems, to be sure, but there is no reason a combination of technology won't render them as irrelevant as the noble draft animal. The draft animal is just less complex than a human. As technology becomes more capable it will be able to do a great deal of what the human mind does.

Kit Sims Taylor quotes two sources at the beginning of his paper, The Brief Reign of the Knowledge Worker: Information Technology and Technological Unemployment:

The general theoretical proposition that the worker who loses his job in one industry will necessarily be able to find employment, possibly after appropriate retraining, in some other industry is as invalid as would be the assertion that horses that lost their job in transportation and agriculture can necessarily have been put to another economically productive use. - Wassily Leontief, source unknown.

... today's tech-savvy, well-compensated worker could become an expensive anachronism as tomorrow's technological advances offer new opportunities for slashing costs and improving economies of scale. A world filled with smart computers, all linked via the Internet, could easily undermine whole sectors of today's vibrant service and information industries. In the next century, lawyers, accountants, and brokers could be the secretaries, bank tellers, and mainframe operators of the 1980s. - Business Week (1998)

New Work?

But the elimination of some work does not equate to the elimination of work, full stop. Technological change brings with it new work. It allows people to carry out new kinds of work previously impossible. Technology substitutes for some skills, true, but people put their talent to use in other areas by using the new technology. New technology will require new skills.

So just as better, more efficient cars need fewer mechanics to service them, more reliable computers need fewer techs to repair them and, perhaps, new forms of plumbing require fewer plumbers, the talents inherent in those jobs can be applied as new skills to new technologies that do need repair work, augmentation, invention, and so on.

Some folks may even find they can do things they previously couldn't, thanks to new technology. It changes what people can do.

Technology also changes what people have to do. Work becomes less about survival and more and more about satisfying different wants: TVs, houses, new clothes, holidays, gadgets, etc. Perhaps the work of the future will be in more esoteric New Age realms as people demand self actualisation services. :-)

As technology improves our tools, it allows us to harness new forms of intelligence, too. In primitive societies, the talents inherent in writing complex search algorithms would not be as useful as they are now. Just as the physical skills involved in hunting may not be as useful now. Except perhaps at singles bars. Who knows what latent skills and proclivities will turn into useful technological skills over the next 100 years.

People just need to adapt their talents using technology into skills that are in demand. As Ron Faulds and Barb Fardell of the Michigan Department of Education pithily put it:

According to the U.S. Department of Labor 1 out of 4 workers today is working for a company he has been employed by for less than one year. More than 1 out of 2 are working at their current job for less than 5 years. The top 10 in-demand jobs in 2010 didn't [sic] exist in 2004.

How do we prepare our students for 14 different jobs and several different careers? We are currently preparing students for jobs that don't yet exist ... using technologies that haven't been invented ... in order to solve problems we don't even know are problems yet.


The elimination of older forms of work is mostly a Good Thing, and improves our lot over time. No-one, bar perhaps the occasional odiferous weirdo, misses the passing of the medieval job of the Fuller, for example:

The 13th century is boom time for the wool trade. With three sheep to every man, woman and child, wool is our biggest export. But nobody likes stiff and itchy cloth that falls to pieces, so we have several openings for fullers. As a fuller, you are expected to walk up and down all day in huge vats of stinking stale urine. The ammonia produced by the rotten wee may make your eyes water, but it creates the softest cloth by drawing out the grease (lanolin) from the wool. If you can dance up to your knees in urine for around two hours per length of cloth, you'll succeed in closing the fibres of the wool and interlocking them to produce cloth that is kind to the skin. You will be doing your part, along with the weavers, dyers and merchants, in making it a world-beating export.

But just as each successive technology depends on the technology that came before it, it also builds on it. Over time the technology is refined using other technology; inventors will be able to invent cleverer and cleverer inventions as the technology available to them improves; they are building on the cleverness of previous generations. "Productivity" increases. Because technology builds on itself in this way, it increases in sophistication with each generation of invention. Change speeds up, if you will.

What happens when technology can invent itself? Now imagine a new type of technology is invented and a factory automatically adjusts and starts building products based on that technology. How would the factory "know" what people - or other automated factories - wanted? What about another automated system that tracks consumer preferences via sensors, computer networks and sales via inventory systems? An automated truck could collect the product from the factory, and then go to a shop and unload the product. What has happened? An entire chain of production and distribution has been automated. Technology starts inventing, producing and distributing itself.

The interesting thing is that just as each generation of technology eliminated some form of technological work, each generation of market technology gradually eliminates some type of human activity in the market system: money eliminated traditional barter, online commerce eliminates the exchange of physical money (and maybe even re-introduces the concept of barter in a more sophisticated form by matching swaps!), bar codes eliminated some aspects of stock-keeping, and IBM's scenario of the future even eliminates the seller as a human element. When entire supply chains, from production to distribution, start to operate without human intervention or even human supervision, the market system starts to turn "invisible": people are not aware of its functioning, and people are not needed to make it function either.

Murray N. Rothbard wrote in 1969:

... it should be clear that things cannot determine prices. Things, whether pieces of money or pieces of sugar or pieces of anything else, can never act; they cannot set prices or supply and demand schedules. All this can be done only by human action: only individual actors can decide whether or not to buy; only their value scales determine prices.

But there is no reason this is always going to be true. "Things" could well start determining prices; values and preferences could be generated algorithmically. By observing and recording your activities, computers could discern patterns in your behaviour and adjust production and distribution to meet your needs, where before prices and sales were needed to provide those signals.

This is already happening. Consider how some advertising is now based on observing online behaviour: observations are sent to software that then adjusts web pages to suit different types of behaviour. This is just the beginning.
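A minimal sketch of that kind of behavioural adjustment, in Python. The page categories, ads and "most viewed category wins" rule are all invented for illustration - no real ad system works this simply - but the shape is the same: observed behaviour in, adjusted content out.

```python
from collections import Counter

# Hypothetical catalogue of ads, keyed by the behaviour they target.
ADS = {
    "sport": "Buy running shoes!",
    "travel": "Cheap flights this weekend!",
    "default": "Welcome to our site!",
}

def profile(page_views):
    """Reduce a list of (page, category) views to the dominant category."""
    counts = Counter(category for _page, category in page_views)
    if not counts:
        return "default"
    category, _n = counts.most_common(1)[0]
    return category

def choose_ad(page_views):
    """Pick the ad matching the visitor's observed behaviour."""
    return ADS.get(profile(page_views), ADS["default"])

views = [("/marathon-results", "sport"),
         ("/hotel-reviews", "travel"),
         ("/football-scores", "sport")]
print(choose_ad(views))  # the "sport" ad wins, 2 views to 1
```

A real system would use far richer signals and statistical models, but the principle - watching you in order to decide what to show you - is already routine.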

Once the business process itself - invention, production, distribution, market analysis, Joseph Schumpeter's entrepreneurship - becomes an object of technological innovation, then business skills, bidding and trading start being made obsolete along with technical skills too. As Eliezer Yudkowsky writes:

"In everyday life, we underrate the importance of intelligence because our social environment consists of only other humans, who as a species are far more intelligent than mice or lizards. The rise of human general intelligence enormously transformed the world. Yet we may have only begun to see the effects of intelligence. In 1965, the Bayesian statistician I. J. Good published a paper titled "Speculations Concerning the First Ultraintelligent Machine", in which he suggested that a sufficiently intelligent AI could redesign itself to make itself smarter, and then, being smarter, reinvent itself and become smarter still - a positive feedback cycle. Good labeled this the "intelligence explosion". An intelligence explosion could reshape the universe more than all human actions up to this point." (Eliezer Yudkowsky, Singularity Institute for Artificial Intelligence)

When economic decisions are made and acted upon by machines instead of humans, each successive generation of technology automates more and more economic activity, and only those with the talent to develop skills in highly specialised, unautomated areas are needed. Those talents become rarer as technology automatically does more and more. Because the technology also makes each individual much, much more productive, the number of people required to work with it also goes down before total human obsolescence is reached. The era of technology generating work gradually comes to an end once most of the economic system is automatic or "invisible".

Example: Open Source Education

This starts with non-physical objects, such as videos, music, text, software and other digitally distributed items. But it will happen in strange ways as technology combines into new unforeseen forms. Consider, for example, the business of education and how it is affected by something that has grown extraordinarily along with the Internet: the open source method of software development.

In economic terms, educational institutions - along with associated guilds and societies - filter people with skills and limit the supply of those they qualify, thus maintaining the scarcity of qualified personnel and thereby putting a market value on the qualifications - beyond what might apply because of a limited supply of talented people.

In the software industry, the same thinking applies. You can artificially create scarcity by controlling access to and so limiting the supply of software. This maintains profitability, just like limiting the number of people with qualifications maintains the value of your qualifications by limiting the supply of qualified folks on the jobs market.

[A lot of business is dependent on maintaining scarcity, even where superabundance exists. It literally manufactures scarcity. This takes various forms, but it usually involves "locking up" data into formats or into particular services. Consider digital music files. The actual cost of storing and distributing a typical song in mp3 format is about .02c. Yet the price of a song is much more than that.]

Open source software development is based around a different approach. The notion is that anyone can read the source code of the software and, if they participate in the project enough, actually modify that code. In some ways, open source is an economic innovation - people work on it, but have no interest in maintaining scarcity. Value comes from increasing usage and participation, not limiting it. The more people make suggestions, the more people use the code, the better it becomes, the wider the pool of talent that is available to the project, so the higher the standards required for code to make it into the source. And status comes with popularity and acceptance from your peers.

From an educational point of view, the open source model is not just for writing software, but also for learning about how software is written. It gives people new to programming a way to see what the old pros do, to read the work of others and then contribute their own work once it has reached a certain level of competency. A stint on an open source project is like an apprenticeship. Open source generates a lot of research, a lot of "how to" guides, a lot of discussion - in short, a lot of knowledge - as a result. By going through the apprenticeship you are "qualified". The proof is in your publicly accessible work.

By doing what they do online, in a peer-reviewed way, and contributing to functioning systems, open source programmers become "qualified". Their quality as programmers is established by their online body of work. The results of their work are there for everyone to see. It is a fairly transparent and accountable process. The same could apply in other areas - web sites establish a reputation for quality, for example, as reviewed by other web sites. For writers the same thing might apply.

This somewhat anarchic new world isn't perfect, of course. It is by no means a replacement for the pure, classical form of educational institution where eccentrics wander about in gowns and talk about topics only they know anything about. But in terms of more staid work qualifications, it is working. People from the open source world are being employed based on their work in open source software: often to carry on working in the same area.

What does this mean? It means open source participation can get you a job. If it can get you a job, it is a qualification. After all, that's the main point of a qualification, apart from adding letters to your name or making you sound important: to demonstrate to an employer or to clients that you know what you're doing. But it's a qualification that doesn't come from an educational institution. To some extent, the existing system of academic qualifications has been bypassed.

Open Source Society

The same approach could apply to other forms of education. Outside programming circles, the 'net is also gradually turning into more than just a source of information, it's turning into a social system, and, gradually, it is making a great deal of knowledge and networks of peer acceptance and review available to many more people. A great deal of that interaction can occur online - increasingly so with video conferencing, shared web sessions, and so on. For educational institutions, this means education - and the ways of garnering qualifications - are gradually turning "open source", in a way, too.

Why couldn't a journalist or an engineer do the same thing as an open source programmer - participate online, garner peer respect, do an apprenticeship and get "qualified"? Of course, scientists need to do experiments in labs, philosophers need time sitting around in Parisian cafes, amongst other things, and engineers need to make sure their bridges stay up, but there's no reason a great many of the functions of educational institutions won't gradually be rendered less important by online communities and new technology. Then, the economic innovation strikes - many more people with talent can garner qualifications in, say, journalism, history, programming or science and parlay that into paid work. [Combined with technology reducing the need for those skills, the increase in the supply of skilled people means a decline in wages.]

Then what happens to educational institutions? Some stay useful as filtering and research organisations, such as Oxford or MIT, filled with esoterically minded eccentrics, but many less prestigious universities start to see the value of their qualifications erode because there are more and more people getting jobs without them, having proven their stuff to the satisfaction of employers in the open source way. As more people with the talent and inclination enter the jobs market, the supply of people with proven skills increases relative to the demand, and the value of those skills - and related qualifications - goes down.

What has happened to the more humdrum jobs in traditional education, the parts that dealt with the economics of education? They are replaced by automated systems. A lot of administrative work is eliminated. Some of the less able academics also have trouble competing for students with other academics once students have a wider selection via the more internationalised world of open source education. The level of talent required is raised; only the most talented continue to prosper.

It has become something of a cliche, but harnessing the collective efforts of many people efficiently can not only create useful software via open source; it can sort videos, provide news and reviews, prepare encyclopaedia articles and so on. That is not to say this process is perfect - it isn't - but it is cheaper, and it produces work that is generally free from labour costs (organising, delivering and storing the information is another matter!). In some areas responsive to this form of collaborative work, the cheap, imperfect efforts of the many replace the individual efforts of a few over time thanks to new networking technology. In other areas, the inspired efforts of amateurs, well filtered by collective analysis, can match it against the efforts of professionals. Only the most talented will continue to prosper as paid professionals in the face of competition from free and "good enough". Technology finds strange ways of eliminating work and reducing costs, but that is what it does eventually.

And all this open source work is mediated by machines, and this leads to ...

Thought Factories

In 1770, Wolfgang von Kempelen debuted his chess-playing machine to the court of the Austrian Empress Maria Theresa. He called it The Turk. It was a box that contained what appeared to be sophisticated technology that allowed it to beat most people at a game of chess - a marvel for the times. Under various owners, the Turk toured the world, from Paris to America. It even beat Napoleon Bonaparte. The truth was it was an illusion: a clever hoax using magnets and machinery that concealed a chess player inside.

Now, what about a machine that gets humans to think for it? Sound outlandish? There already is one. Amazon.com have created such a service and called it MTurk, after Kempelen's hoax. Amazon say:

For software developers, the Amazon Mechanical Turk web service solves the problem of building applications that until now have not worked well because they lack human intelligence. Humans are much more effective than computers at solving some types of problems, like finding specific objects in pictures, evaluating beauty, or translating text. The Amazon Mechanical Turk web service gives developers a programmable interface to a network of humans to solve these kinds of problems and incorporate this human intelligence into their applications.

Artificial intelligence, long the holy grail of software developers, might just start out as a virtual Turk. Gradually, as patterns emerge and new algorithms are invented, artificial "thinking" takes the place of "real" thinking. In a seminal paper on artificial intelligence called "Computing Machinery and Intelligence", Alan Turing proposed a test to see if a machine was, indeed, "intelligent". To him, if a computer could imitate a person well enough in conversation, then it would pass the "Turing test": for all intents and purposes it was intelligent. That may be some way off. But computers that make use of human thought as part of their programming are already here. Kevin Kelly writes of the internet:

And who will write the software that makes this contraption useful and productive?

We will. Each of us already does it every day. When we post and then tag pictures on the community photo album Flickr, we are teaching the Machine to give names to images. The thickening links between caption and picture form a neural net that learns.

Think of the 100 billion times a day humans click on a webpage as a way of teaching the Machine what we think is important. Each time we forge a link between words, we teach it an idea. Wikipedia encourages its citizen authors to link each fact in an article to a reference citation. Over time, a Wikipedia article becomes totally underlined in blue as ideas are cross-referenced. That cross-referencing is how brains think and remember. It is how neural nets answer questions. It is how our global skin of neurons will adapt autonomously and acquire a higher level of knowledge.

The human brain has no department full of programming cells that configure the mind. Brain cells program themselves simply by being used. Likewise, our questions program the Machine to answer questions. We think we are merely wasting time when we surf mindlessly or blog an item, but each time we click a link we strengthen a node somewhere in the web OS, thereby programming the Machine by using it.

These are the "thought factories" of the future: where people do piece work using their intelligence, until machines can do it better. The "thought factory" won't be "intelligent", but it will get things done intelligently. The product won't be cars, it will be processed information. Over time, in the same way a lot of manual labour in factories was gradually eliminated, so will a lot of thinking.
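That hand-over from human piece work to machine can be sketched as a simple dispatch loop. This is a hypothetical illustration in Python - the agreement threshold, warm-up period and toy task are all invented: each task goes to a human worker while a machine "apprentice" is checked against the human's answers, and once it agrees often enough, the work is quietly rerouted to the machine.

```python
def run_thought_factory(tasks, human, machine, threshold=0.9, warmup=5):
    """Route tasks to a human worker, logging how often the machine agrees.
    Once the machine has matched the human on enough warmup tasks,
    hand the work over to the machine entirely."""
    agreements, seen = 0, 0
    results = []
    use_machine = False
    for task in tasks:
        if use_machine:
            results.append(("machine", machine(task)))
            continue
        answer = human(task)
        results.append(("human", answer))
        seen += 1
        if machine(task) == answer:
            agreements += 1
        if seen >= warmup and agreements / seen >= threshold:
            use_machine = True  # the apprentice takes over

    return results

# Toy task: judging whether a number is "big". The machine has learned
# the same rule as the human, so it takes over after the warmup period.
human = lambda n: n > 10
machine = lambda n: n > 10
out = run_thought_factory(range(20), human, machine)
print(sum(1 for who, _ in out if who == "machine"))  # tasks done by machine: 15
```

Real pipelines are messier - workers disagree, the machine improves gradually rather than all at once - but the economic logic of the loop is the point: the human's piece work is also the machine's training data.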

Example: Classifying Pictures

Let's say you take a set of pictures and ask a person to annotate them with "key" words. So if you have a picture of a plane flying through the sky, the person might annotate it with something such as "jet", "sky" and "plane". Now what if you pushed the picture through a slot and asked the human on the other side to annotate it? You don't get to talk to the human, but the words they suggest appear on a screen.

Say the picture is of a polar bear walking across some snow. You get two results. The first:

"bear, polar, snow, tundra"

The second:

"polar, tundra, bear, snow, ice."

Which one was produced by a human and which was produced by a computer? Hard to tell, isn't it?

The second annotation was done by a computer using software which analysed a set of annotations by humans and used it to allocate "key" words to new images (see Supervised Learning of Semantic Classes for Image Annotation and Retrieval, Gustavo Carneiro et al). Although this isn't a Turing test as such, it is getting there, by using the intermediary step of a thought factory (a database of images tagged by humans). By observing and collecting data on humans, computers can start to mimic humans well enough to pass mini-Turing tests!
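The flavour of such a system can be sketched crudely in Python. This is not the model from the Carneiro paper - the "images" below are stand-in feature vectors and the "learning" is just a nearest-neighbour lookup over a small, invented human-tagged database - but it shows how a pile of human annotations becomes an automatic annotator.

```python
import math

# Toy "thought factory" output: images, reduced to crude feature
# vectors, that humans have already tagged. All values are invented.
TAGGED = [
    ((0.9, 0.9, 0.1), {"snow", "polar", "bear"}),
    ((0.8, 0.9, 0.2), {"snow", "ice", "tundra"}),
    ((0.1, 0.2, 0.9), {"jet", "sky", "plane"}),
]

def annotate(features, k=2):
    """Tag a new image with the tags of its k nearest human-tagged images."""
    nearest = sorted(TAGGED, key=lambda item: math.dist(features, item[0]))[:k]
    tags = set()
    for _feat, t in nearest:
        tags |= t
    return sorted(tags)

# A new image that "looks like" the snowy ones borrows their tags:
print(annotate((0.85, 0.9, 0.15)))
```

The real systems use proper statistical models of image features, but the dependence is the same: no database of human tags, no machine annotator.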

Technology does find odd ways of doing things humans once did, or of removing a lot of toil from our lives. A robot may not turn up to your office one day and say "G'day, I'm your replacement", or turn up to do your house-keeping, drive your car and mow the lawn, but over time technology will become more intelligent and do more and more work based around traditional "thinking".

How long before it starts to happen to the world of physical objects too? It will start by taking over administrative roles of various kinds in these two ways. First, with information about those objects - which can be digitally distributed, manipulated and Turkified in thought factories. Second, with ever more minute and detailed instructions from computers that use humans as cheap forms of sensory input, basic information processing and agents for physical interaction with the real world.


While new business opportunities do arise as new technology is put to use, the same technology will significantly lower the profits of all businesses over time, mainly because the easier it is for a consumer to compare prices and products, the better her negotiating position with any given seller; she is simply more informed, and this makes price competition more intense for sellers. And what if this process of comparing products and prices is done automatically on the consumer's behalf in the "invisible" parts of the economy? Then the pressure on profits grows even more intense.
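A toy simulation makes the mechanism concrete - the sellers, costs and undercutting rule below are all invented: a consumer's agent always buys from the cheapest seller, losing sellers undercut the winner each round, and prices get squeezed down towards cost.

```python
def shop(prices):
    """The consumer's agent: buy from the cheapest seller."""
    return min(prices, key=prices.get)

def simulate(cost, prices, rounds=50, step=1.0):
    """Each round, sellers who lost the sale undercut the winning price
    (never below cost). Margins are competed away automatically."""
    for _ in range(rounds):
        winner = shop(prices)
        best = prices[winner]
        for seller in prices:
            if seller != winner:
                prices[seller] = max(cost, best - step)
    return prices

prices = {"A": 120.0, "B": 110.0, "C": 130.0}
final = simulate(cost=100.0, prices=prices)
print(min(final.values()))  # price driven down to cost: 100.0
```

With human buyers the rounds take months; with automated agents on both sides they take milliseconds, which is why the pressure on profits intensifies once the comparison is done "invisibly".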

One example is the price of cars in the USA. In 2005, Florian Zettelmeyer, Scott Morton and Jorge Silva-Risso wrote a paper called "Cowboys or Cowards: Why are Internet Car Prices Lower?" They estimated that "savings to [the] initial group of early adopters who use Autobytel.com are at least $240 million per year", and that "since there are other referral and informational sites that may also help consumers bargain more effectively with dealers, we conclude that - at least at the time of our study - the Internet is facilitating a substantial redistribution of surplus in the retail auto industry." At the risk of being simplistic, this translates to: the internet is reducing the prices of cars and the profits made from selling them.

Seth Goldstein writes:

A lead is generated when a consumer clicks on an ad and is directed to a landing page - a web site that collects information critical to determining how valuable a potential customer is - and fills out a form. This form includes both contact and intention information about the consumer. This lead is then sold to an advertiser, who contacts the consumer to close the sale.

He then explains his business, called "/ROOT Markets":

We are building an exchange for the emerging market of Internet-generated leads. Applying the transparent structures of a financial exchange to the fractured inefficiencies of the current lead generation market ...

In other words, Mr. Goldstein is planning to turn what is an "unofficial" market in consumer preferences into an official one. It's not a huge leap of logic to see this sort of idea being used in combination with things such as "thought factories" to essentially automate vast swathes of business that require quite sophisticated business skills at the moment.

A consequence of all these sorts of trends is a process known as "commodification". A product or service starts as a unique, pricey thing with special characteristics people desire. It sells well and profits are high. Other businesspeople see the success of the product and mimic it, in the hope of making some money from the trend too. Competition on price increases. The different products offered by the various producers gradually become much the same. People simply buy the cheapest one. In addition, in the long term, as automated trading agents act on behalf of buyers and sellers in automated marketplaces, prices tend to be bargained down to minimal levels because each agent is as well informed as any other: in time this means all automatically produced goods and services are priced at the level that covers costs, but no more or less. Automated competition leads to automated goods and services being "commodified".

Nick Carr, the author of "IT Doesn't Matter", wrote:

Although more complex and malleable than its predecessors, IT has all the hallmarks of an infrastructural technology. In fact, its mix of characteristics guarantees particularly rapid commoditisation. IT is, first of all, a transport mechanism - it carries digital information just as railroads carry goods and power grids carry electricity. And like any transport mechanism, it is far more valuable when shared than when used in isolation. The history of IT in business has been a history of increased interconnectivity and interoperability, from mainframe time-sharing to minicomputer-based local area networks to broader Ethernet networks and on to the Internet. Each stage in that progression has involved greater standardization of the technology and, at least recently, greater homogenization of its functionality. For most business applications today, the benefits of customization would be overwhelmed by the costs of isolation.

IT is also highly replicable. Indeed, it is hard to imagine a more perfect commodity than a byte of data - endlessly and perfectly reproducible at virtually no cost. The near-infinite scalability of many IT functions, when combined with technical standardization, dooms most proprietary applications to economic obsolescence. Why write your own application for word processing or e-mail or, for that matter, supply-chain management when you can buy a ready-made, state-of-the-art application for a fraction of the cost? But it's not just the software that is replicable. Because most business activities and processes have come to be embedded in software, they become replicable, too. When companies buy a generic application, they buy a generic process as well. Both the cost savings and the interoperability benefits make the sacrifice of distinctiveness unavoidable.

This leads to reduced profit margins, and to fewer ways for people to earn a living from business. In areas where "stuff" is essentially free - software produced under open source development models, for example - all goods and services like it end up being "free", because you can't compete with free, anyone can tell you that! The only way to stop this process speeding up is to hinder the technology through regulation of some kind. (This is where business interests and government converge.)

Hang on!

But technology has some way to go, yet ...

Eye halve a spelling chequer

It came with my pea sea

It plainly marques four my revue

Miss steaks eye kin knot sea.

Eye strike a quay and type a word

And weight four it two say

Weather eye am wrong oar write

It shows me strait a weigh.

As soon as a mist ache is maid

It nose bee fore two long

And eye can put the error rite

Its rare lea ever wrong.

Eye have run this poem threw it

I am shore your pleased two no

Its letter perfect in it's weigh

My chequer tolled me sew.

(Sauce unknown)

But Google now have a context-sensitive spell checker. The "Did you mean?" response is just that. And it's based on millions of inputs by humans every day. Try typing this sentence into Google's search box:

their is a big tree

Even problems like "context-sensitive spell checking" are being solved by the thought factories.
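A rough sketch of how such a corrector might work, using nothing but counts of word pairs drawn from text people have actually typed. The tiny corpus, the `CONFUSABLE` table and the `correct` function are all invented for illustration; Google's actual system is far more sophisticated:

```python
from collections import Counter

# Stand-in for millions of real queries and documents.
corpus = ("there is a big tree near the house . "
          "their house is big . there is no spoon . "
          "i like their tree").split()

# Count how often each word pair appears in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

# Words that sound alike and are commonly confused.
CONFUSABLE = {"their": {"their", "there"}, "there": {"their", "there"}}

def correct(sentence):
    words = sentence.split()
    out = []
    for i, w in enumerate(words):
        candidates = CONFUSABLE.get(w, {w})
        nxt = words[i + 1] if i + 1 < len(words) else None
        # Pick the candidate whose pairing with the next word is most common.
        best = max(candidates, key=lambda c: bigrams[(c, nxt)])
        out.append(best)
    return " ".join(out)

print(correct("their is a big tree"))  # → "there is a big tree"
```

Because "there is" vastly outnumbers "their is" in what people write, the context decides the spelling, even though both words are spelled correctly in isolation.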

Robot Overlords

Technology requires stable environments to work in: robots have difficulty doing things such as negotiating unknown terrain, dealing with lots of variables such as weather, or driving cars in peak hour traffic. Spelling checkers still have trouble working out what you meant. Software still infuriates people with its lack of sophistication. Robots still don't do the gardening. A lot of computers running all the niftiest new systems run command lines that don't look a lot different from the command lines of the 70s. But the real change is networking and the internet, really an early to mid 1990s phenomenon; hybrid computer systems that use thought factories are only a recent innovation made possible by the internet.

The corpus, the body of knowledge that the internet draws upon, is partly based on things that are demonstrably true, but much of it is a matter of opinion. Artificial intelligence, just like human intelligence, has problems sorting through information and giving it different levels of credibility.

Oren Etzioni from the University of Washington talks about his project in artificial intelligence called "Know It All". "Know It All", and its successor, "Text Runner", attempt to extract meaning from text by relating pieces of information together. The source of the text is the internet. Extracting data from the web and making it meaningful is difficult, Etzioni says:

Often when I give this talk people say "wait a minute", what about time? The statement that Clinton was president of the USA was true in 1996, it's not true today. What about context? One of my favorite misunderstandings of the extraction system that we've been building is you ask it who was the first American in space and it says John Glenn. And if you go and drill down you find that comes from a page entitled "Common Misconceptions about Space". The system will happily believe that. Or matters of opinion, if you ask it who killed JFK, it'll come back and say "Elvis" because that's a popular answer on the web. Another issue is multiple word senses, if you "listen" to the web you think Amazon is a bookstore not a rainforest and Chevy Chase an actor not a suburb in Maryland. Et cetera. Et cetera. [talk 12:38]
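The failure mode Etzioni describes is easy to reproduce. This is not KnowItAll or TextRunner code, and the extractions below are invented, but they show how a system that trusts redundancy - "the most common answer on the web" - happily returns a popular misconception:

```python
from collections import Counter

# Pretend these (question, answer) pairs were extracted from web pages.
extractions = [
    ("who killed JFK", "Lee Harvey Oswald"),
    ("who killed JFK", "Elvis"),
    ("who killed JFK", "Elvis"),
    ("who killed JFK", "Elvis"),          # joke and opinion pages outnumber facts
    ("first American in space", "John Glenn"),
    ("first American in space", "John Glenn"),  # "misconceptions" pages are popular
    ("first American in space", "Alan Shepard"),
]

def answer(question):
    # Majority vote over everything extracted for this question.
    votes = Counter(a for q, a in extractions if q == question)
    return votes.most_common(1)[0][0]

print(answer("who killed JFK"))  # the popular answer wins, true or not
```

Redundancy is a powerful signal, but on its own it measures popularity, not truth - which is exactly the problem the quote describes.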

So the scenario of technology gradually eliminating human work seems like an exercise in science fiction - we're talking about robots, thought factories, turkification, artificial intelligence and automated trucks. Why not jet-packs and warp drives and space-ships too! But beyond the speculation about specifics, which will seem naive and silly in time, there is a trend. The market system and technology generate work up to a certain point in certain areas. Then a turning point is reached. The market system and technology then gradually eliminate work in that area. The argument is that this trend applies generally: to most talents humans possess. As people usually have a limited set of natural talents and a limit to the number of skills they can develop, groups of people will gradually be made obsolete.

The counter-argument is that humans have infinite potential; technology releases more and more of that potential. Technology may eliminate work, even eliminate some skills, but humans are a mysterious bunch with all sorts of hidden depths. Technology lets humanity reach into those mysterious depths. As toil is eliminated humans get to concentrate increasingly on the aesthetic, on learning, on other areas we can't foresee; as the problem of scarcity is dealt with humans are freed from need and increasingly from want. It is an optimistic view, certainly. It also requires a big change in education, psychology and culture.

Another view is that skills require certain in-born talents and proclivities. Sadly, I will never be able to dunk a basketball like Michael Jordan, no matter how hard I try. No amount of work in the gym will turn me into a 6'6" ex-NBA shooting guard with my own line of sneakers. You just can't learn a skill if you don't have the aptitude for it. The best I can hope for is not to be embarrassed in my local C Grade competition. I am just not talented enough to make it in the NBA. It is possible that more and more people will be unable to participate in the market system in the future hinted at by IBM's marketing department. They are to future technology what the mule was to the tractor. And each of us will feel awfully mule-like one day if the automated progeny of the MTurk have their way in the bustling, automated, invisible marketplace of the future.

The turning point for information-based work is probably a little way off. But it is foreseeable given existing trends. The turning point for manual factory work, for example, has already occurred, at least in the modern economies. Physical labour will be hit too, when technology can, one way or another, fight fires, clean streets, clear gutters and do the gardening reasonably well. Consider the "Roomba" produced by iRobot, of which over 2 million have been sold. It hoovers your floor automatically. iRobot's marketing spiel goes a little like this:

How does Roomba know where to clean? Do I have to program it?

There's nothing to program. Roomba's sensors figure out the size and shape of your room, and continually readjust its cleaning pattern until it has covered your whole floor, multiple times.

Will Roomba clean as well as I do?

Roomba is a robot, so it uses a different cleaning method than you would. Its compact size enables it to reach under furniture and other hard-to-reach places. Plus, because Roomba works at the press of a button, you can use it more frequently to help you maintain a cleaner home - all week long.

Can it pick up pet hair?

Roomba is great for picking up pet hair; its counter-rotating brushes actually pull the hair out of rugs and off your hard floor. Everyone's pet is different, but most tend to react with curiosity at first, then soon after get used to it. Say goodbye to pet-hair tumbleweeds.

Bill Gates wrote in Scientific American:

Imagine being present at the birth of a new industry. It is an industry based on groundbreaking new technologies, wherein a handful of well-established corporations sell highly specialised devices for business use and a fast-growing number of start-up companies produce innovative toys, gadgets for hobbyists and other interesting niche products. But it is also a highly fragmented industry with few common standards or platforms. Projects are complex, progress is slow, and practical applications are relatively rare. In fact, for all the excitement and promise, no one can say with any certainty when--or even if--this industry will achieve critical mass. If it does, though, it may well change the world.

Of course, the paragraph above could be a description of the computer industry during the mid-1970s, around the time that Paul Allen and I launched Microsoft. Back then, big, expensive mainframe computers ran the back-office operations for major companies, governmental departments and other institutions. Researchers at leading universities and industrial laboratories were creating the basic building blocks that would make the information age possible. Intel had just introduced the 8080 microprocessor, and Atari was selling the popular electronic game Pong. At homegrown computer clubs, enthusiasts struggled to figure out exactly what this new technology was good for.

Before human intervention and pet-hair tumbleweeds are entirely eliminated, though, humans will be made so much more efficient by technology that fewer and fewer people will be required to get the same results. As more and more people compete for fewer and fewer jobs, the value of labour will go down. So will spending power. Debt will soar. But that's a story for another article.

Some personal services - those offered by New Age practitioners, artisans, artists, some businessmen and other even more nebulous professions - will probably last the longest, because they are much harder to define and automate. Perhaps the Turkification and thought factories of millions of hobbyists will produce lots of work that, once well filtered by technology, replaces a lot of professional artists, musicians, businessmen and writers too?

This is great in one sense: toil is being eliminated, people have more free time, people don't have to work as hard, and what they do do is more interesting. The market system is much more efficient. But how does the market system distribute the fruits of this technological change?

The Invisible Economy & the Bullshit Industry

Gradually, as technology starts building on itself and aspects of the economy become "invisible", only the most talented businesspeople can still earn a living from business, and only the most talented workers are needed to work with the technology. The level of talent required increases with each successive generation of technology. The pool of talent is smaller. But so is the number of people required to do any work at all. Competitive pressures ensure that the businesses that don't make the most of these changes eventually fail. As Vernor Vinge writes:

The competitive advantage - economic, military, even artistic - of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.

Long term, as the proportion of economic activity taken over by automated systems increases, competition, work, trade, bidding and so on will be abstracted. The human social system will be all about administering how the scarce resources produced automatically are distributed amongst the humans. It is already happening. It's just that a lot of people don't realise it.

We are entering the era of ...

The Bullshit Industry