Population 0 Technologies, Lost Thoughts and the Wizards of the Future

Wednesday, December 11th 2013

As technology evolves it develops more and more layers. One type of technology allows for another, which allows for another, and so on.

Human understanding of the technology, depending on where in the stack you specialise, tends to abstract away the layers below and above.

Collectively, there is an abstract knowledge of most layers of the stack. Socially, there is a specialist class of humans that understands that knowledge and works with that stack of technology.

As you go down through the levels of the technology stack, the number of humans involved tends to go down and the effects of their decisions become more influential.

At some point, at lower levels of abstraction, the number of humans involved directly will be 0. The humans who work at the next layer up have an entirely abstract understanding of the layer just below.

Over time, should humans try to intervene at a population 0 level of the stack, they will find it too complex to intervene in. They simply won't understand the side-effects their adjustments would have. It will be too risky to make adjustments. They will only be able to make adjustments at the next layer (and abstraction) up.

What about human organisation? Kim the production line worker thinks in terms of specific tasks with physical objects and his pay. His line manager thinks in terms of the politics of the people on the floor, throughput, quality measurements, working hours, etc. Bob the CEO thinks in "strategic directions", board meetings, stock prices, and other generalities. An investor just thinks in stocks, shares, puts, bonds, etc. Thinking becomes more abstract the further you move up a hierarchy. This is true of technology as well.

But!

Think about the number of people who use mobile phones versus the number of people who build and design them. Or think about the number of queries humans make to a search engine versus the number of people working on the search engine. Or the number of people who use cars versus the number of people who build and design them.

As technology becomes more capable of replacing human labour, the trend becomes more and more definite:

  • In human hierarchies, the number of humans becomes smaller the further you move up the hierarchy.

  • In the technology stack, the population of human specialists at each level becomes larger the further you move up the stack.

The problem is that the current hierarchy developed by humans has not evolved to be top-heavy. Just the opposite. But the technological hierarchy, if you will, invariably blends with the traditional one. What happens as a result is encapsulated in my vague ramblings about the Bullshit Industry.

At some point, though, something weird happens: all humans become a layer in the technology stack. We interact with the abstraction that represents the layers below us in the stack. But, over time, technology builds on top of us - if you think of humans as "technology" - and interacts with us as an abstraction too.

Our hierarchical thinking tends to see this as bad. We want to be at the top, the Kings of Technology. We don't want to be mere vassals of Lord Skynet. Bill Joy quotes Ted Kaczynski, the Unabomber, in his article "Why the Future Doesn't Need Us":

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions.

And …

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.

I think our experience as a layer in the stack will be more complex. The purpose of most machines at the moment is clear: a car moves you from A to B. A phone lets you talk to friends. A credit card allows you to make (abstract) transactions with other humans. And so on. Automation doesn't change our understanding much. At this point. A self-driving car still moves you from A to B.

But, eventually, the purpose of some technologies will become more opaque: we won't understand what the technology is for. It will just operate. This sort of technology will bubble up from layers in the technology stack where humans have become obsolete; where humans' specialist skills are no longer required. Where the population of human specialists is 0. These bubblings will be influential and change what people do at all levels of the stack.

These population 0 technologies won't just manifest themselves as obvious, nameable, inscrutable things doing stuff that we don't understand. More usually, they will manifest themselves as trends, as unpredictable congruences of circumstance and technology. As complex changes we didn't foresee, as activities by machines that no-one specified, as systems that have grown so complex we can't adjust their trajectory without risking unknown, possibly dangerous side-effects.

All this activity will be accessed by layers of technology "above" us in the tech stack as yet another abstraction.

In some ways you would have no way of knowing if this hadn't already happened :-).

Whether this is good or bad, or what it means (or, indeed, the whole notion of stacks of technology, etc) are probably the wrong questions to ask in the wrong way about the wrong things. The ways we currently understand the world (a product of the limits of our senses and brains) are fairly inadequate to explain what is going on. In the same way our senses limit our imaginations to visualising 3 dimensions, despite maths telling us we can imagine many more, the limits of our cognition in dealing with complexity at massive scale will limit our capacity to understand all this evolved complexity.

Moreover, the difference between what we naturally think about things and what the machines have thought about things for us will be difficult to discern! Not that it really matters.

At which point the question becomes: what ways of thinking have we lost? It would be hard to tell. If you travelled to this future from our present the humans of the future might regard you with interest:

"Wow! She thinks that way! She is so self-contained! And yet so unable to function well. I never expected to encounter one of those in my life. I have only seen the simulations!"

They may even be impressed (or even slightly rueful) that they have lost so many of the old, organic ways of thinking. But, to you, these humans will be oddly inhuman and inscrutable. Complex and magical. Perhaps even a little wizard-like and dangerous. You'll regard them with fearful, suspicious, atavistic eyes.

Until, of course, you realise you're just playing a computer game.


(Thanks to Emlyn for inspiring my ramblings)
