
Archive for the ‘Computers’ Category

And I have to start this essay with a simple statement: it is not lost on me that everything that follows is 100% derived from my own history, studies, jobs, artworks, and everything else that goes into me. So maybe this is just a theory of myself, or not even a theory, but yet another expression in a lifetime of expressions. At the very least I enjoyed 20 hours of re-reading some great science, crafting what I think is a pretty neat piece of artwork, and then summarizing some pondering. Then again, maybe I've made strides on some general abstract level. In either case, it's just another contingent reconfiguration of things.

At the end I present all the resources I read and consulted during the writing (but not the editing) and the making of the embedded 19×24 inch drawing and ink painting (which has most of this essay written and drawn into it). I drank four cups of coffee over five hours and had three tacos and six hot wings during this process. Additionally, I listened to "The Essential Philip Glass" while sometimes watching the movie "The Devil Wears Prada" and the latest SNL episode.

——————-  

There is a core problem with all theories and with theory at large – they are not The Truth and do not interact in the universe like the things they refer to. Theories are things unto themselves. They are tools to help craft additional theories and to spur on revised dabbling in the world.


We have concocted an unbelievable account of reality across religious, business, mathematical, political and scientific categories. Immense stretches of imagination are required to connect the dots from the category theory of mathematics to the radical behaviorism of psychology to machine learning in computer science to gravitational waves in cosmology to color theory in art. The theories themselves have no easy bridge – logical, spiritual or even syntactic.

Furthering the challenge is the lack of coherence and interoperability of our measurement and crafting tools. Information exchange between our engineered systems has always been a struggle. Even our most finely crafted gadgets and computers still suffer from data exchange corruption. Even when we do seem to find some useful notion about the world, it is very difficult for us to transmit that notion across mediums, toolsets and brains.

And yet, therein lies the reveal!

A simple yet imaginative re-think provides immense power. Consider everything as a network. Literally the simplest concept of a network – a set of nodes connected by edges. Consider everything as part of a network, a subnetwork of the universe. All subnetworks are connected, more or less, to the other subnetworks. From massive stars to a single boson, all are nodes in networks, and those networks are nodes in networks of networks. Our theories are networks of language, logic, inference, experiment and context. Our tools are just networks of metals, atoms and light. It's not easy to replace your database of notions, reinforced over the years, with this simple idea.
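
To make the node/edge picture concrete, here is a minimal sketch in plain Python (my own illustration – the node names and the subnetwork chosen are arbitrary, not part of any formal claim):

```python
# A network is nothing more than a set of nodes and a set of edges between them.
nodes = {"star", "boson", "person", "theory", "tool"}
edges = {("star", "boson"), ("person", "theory"), ("person", "tool"), ("theory", "tool")}

def subnetwork(keep, all_edges):
    """Everything is a subnetwork: keep only the edges among a subset of nodes."""
    return {(a, b) for (a, b) in all_edges if a in keep and b in keep}

print(subnetwork({"person", "theory", "tool"}, edges))
# prints the three edges among person / theory / tool
```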

But really ask yourself why that is so hard when you can believe that black holes collide and send out gravitational waves that slightly wobble spacetime 1.3 billion light years away – or, if you believe in the Christian God, consider how it is believable that woman was created from the rib of a man named Adam. It's all a bit far-fetched, but we buy these other explanations because the vast network of culture, tradition, language and semiotics has built our brains and worldviews up this way.

Long ago we learned that our senses are clever biological interpreters of internal and external context. Our eyes do not see most of "reality" – just a fairly coarse (roughly 30 frames per second) and small slice of the electromagnetic spectrum (visible light). In the 1930s we learned that even mathematics itself, and the computers we'd eventually construct, cannot prove many of the claims they make; we just have to accept those claims (incompleteness and the halting problem).

These are not flaws in our current understanding or current abilities.  These are fundamental features of reality – any reality at all.  In fact, without this incompleteness and clever loose interpretations of information between networks there would be no reality at all – no existence.   This is a claim to return to later.

At the core of all theories we are always left with uncertainty and probability statements. We cannot state or refer to anything with certainty; we can only claim some confidence that what we're claiming or observing might, more or less, be a real effect or relation. Even in mathematics, with some of the simplest theorems and their logical proofs, we must assume axioms we cannot prove – and while that's an immensely useful trick, it certainly doesn't imply that any of the axioms are actually true or refer to anything real.

The notion of probability and uncertainty is no easy subject either. Probability is a measure of what? Is it a measure of belief (Bayes) that something will happen given something else? Is it a measure of missing information – that a claim carries only some fraction of the available information? Is it a measure of complexity?


Again, the notion of networks is incredibly helpful. Probability is a measure of contingency. Contingency, as defined and used here, is a notion of the connectivity of a network and the nodes within it. There need be no hard and fast assignment of the unit of contingency – different measures are useful and instructive for different applications. But there's a basic notion at the heart of all of them: contingency is a cost function for going from one configuration of the network to another.

And that leads to another startling idea. Spacetime itself is just a network (an obvious intuition from my previous statement), and everything is really just a spacetime network. Time is not the ticks on a clock nor an arrow marching forward. Time is nothing but a measure of the steps needed to reconfigure a network from state A to some state B. Reconfiguration steps are not done in time; they are time itself.
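
A toy formalization of the last two paragraphs (my own construction, not a formal definition from the physics): treat a network state as a set of edges, measure contingency as the number of edit steps needed to turn one state into another, and read that same count of reconfiguration steps as elapsed "time".

```python
def contingency(state_a: set, state_b: set) -> int:
    """Edit steps (edge removals + additions) to reconfigure state_a into state_b."""
    removals = state_a - state_b
    additions = state_b - state_a
    return len(removals) + len(additions)

# Two configurations of a tiny four-node network (names are arbitrary).
state_a = {("sun", "earth"), ("earth", "moon"), ("earth", "observer")}
state_b = {("sun", "earth"), ("earth", "moon"), ("moon", "observer")}

print(contingency(state_a, state_b))  # 2 reconfiguration steps -> two "ticks" of time
```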

(Most of my initial thinking comes from Wolfram and others who were working on this long before I thought about it: http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/ – Wolfram and others have done a ton of heavy lifting to translate the accepted theories and math into network terms.)

This re-framing of everything into network thinking requires a huge amount of translation of notions of waves, light, gravity, mass, fields, etc. into network conventions. While attempting that in blog form is fun, and I keep attempting it, the reality of the task is that no amount of writing about this stuff will make a sufficient proof or even a useful explanation of the idea to people.

Luckily, it occurred to me (a contingent network myself!) that everyone is already doing this translation and, even more startling, that it couldn't go any other way. Our values and traditions started to be codified into explicit networks with the advent of written law and various cultural institutions like religion and formal education. Our communities have now been codified into networks by online social networks. Our locations and travels have been codified by GPS satellites and online mapping services. Our theories and knowledge are being codified into wikis and programs (Wolfram Alpha, Google Graph, deep learning networks, etc). Our physical interpretations of the world have been codified into fine arts, pop arts, movies and now virtual and augmented realities. Our inner events/context are being codified by wearable technologies. And now the cosmos has unlocked gravitational waves for us, so even the mysteries of black holes and dark matter will start being codified into knowledge systems.

It’s worth a few thoughts about Light, Gravity, Forces, Fields, Behavior, Computation.

  • Light (electromagnetic wave-particles) is the subnetwork encoding the total configurations of the entire universe and every subnetwork.
  • Gravity (and gravitational wave-particles) is the subnetwork of how all the subnetworks over a certain contingency level (mass) are connected.
  • The other three fundamental forces (electromagnetic, weak nuclear, strong nuclear) are also just subnetworks encoding how all subatomic particles are connected.
  • Field is just another term for network, hardly worth a mention.
  • Behavior observations are partially encoded subnetworks of the connections between subnetworks.  They do not encode the entirety of a connection except for the smallest, simplest networks.
  • Computation is time is the instruction set is a network encoding how to transform one subnetwork to another subnetwork.

These re-framed concepts allow us to move across phenomenal categories and up and down levels of scale and measurement fidelity.  They open up improved ways of connecting the dots between cross-category experiments and theories.   Consider radical behaviorism and schedules of reinforcement combined with the Probably Approximately Correct learning theory in computer science against a notion of light and gravity and contingency as defined above.

What we find is that learning and behavior based on schedules of reinforcement is actually the only way a subnetwork (say, a person) or a network of subnetworks (a community) could encode the vast contingent network (internal and external environments, etc). Some schedules of reinforcement maintain responses better than others, and again here we find the explanation. Consider a variable ratio schedule reinforcing a network (see here for more details: https://en.wikipedia.org/wiki/Reinforcement#Intermittent_reinforcement.3B_schedules). A variable ratio schedule (and variations/compositions on it) is a richer contingent network itself than, say, a fixed ratio schedule. That is, as a network encoding information between networks (essentially a computer program and data), the variable ratio has more algorithmic content to keep associations linked across many related network configurations.
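
To see the "schedule as a small program" point, here is a minimal sketch (my own illustration; the parameters and the generator framing are assumptions, not a behavioral model): both schedules reinforce, on average, one response in five, but the variable ratio rule carries extra random structure rather than a single predictable counter.

```python
import random
random.seed(1)

def fixed_ratio(n):
    """Reinforce every n-th response: fully predictable from one counter."""
    count = 0
    while True:
        count += 1
        yield count % n == 0

def variable_ratio(n):
    """Reinforce each response with probability 1/n: the same average rate,
    but the pattern of reinforcement carries more algorithmic content."""
    while True:
        yield random.random() < 1.0 / n

def sample(schedule, responses=20):
    """Which of the first `responses` responses get reinforced under a schedule."""
    return [next(schedule) for _ in range(responses)]

print("FR-5:", sample(fixed_ratio(5)))
print("VR-5:", sample(variable_ratio(5)))
```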

Not surprisingly this is exactly the notion of gravity explained above.  Richer, more complex networks with richer connections to other subnetworks have much more gravity – that is they attract more subnetworks to connect.  They literally curve spacetime.

To add another wrinkle to the theory, it has been observed across a variety of categories that the universe seems to prefer computational efficiency. Nearly all scientific disciplines, from linguistics to evolutionary biology to physics to chemistry to logic, end up with some basic notion of a "Path of Least Effort" (https://en.wikipedia.org/wiki/Principle_of_least_effort). In the space of all possible contingent situations, networks tend to connect in the computationally most efficient way – they encode each other efficiently. That is not to say it happens that way all the time. In fact, this idea led me to thinking that while all configurations of subnetworks exist, the most commonly observed ones (I use the term: robust) are the efficient configurations. I postulate this explains mathematical constructs such as the Platonic solids and transcendental numbers, and likely the physical constants. That is, in the space of all possible things, the mean of the distribution of robust things is the set of mathematical abstractions. While we rarely experience a perfect circle, we experience many variations on robust circular things… and the middle of that distribution is the perfect circle.
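
A quick numerical illustration of that last claim (my own toy construction, not a proof): generate many imperfect "circular" things as noisy radius measurements and check that the average of the ensemble sits very close to a perfect circle of radius 1.

```python
import random
random.seed(2)

def noisy_circle(points=360, noise=0.25):
    """One imperfect circular thing: a radius measured at each degree, plus noise."""
    return [1.0 + random.uniform(-noise, noise) for _ in range(points)]

shapes = [noisy_circle() for _ in range(1000)]

# Average the radius at each angle across all the imperfect shapes.
mean_shape = [sum(shape[i] for shape in shapes) / len(shapes) for i in range(360)]

worst_deviation = max(abs(r - 1.0) for r in mean_shape)
print("max deviation of the averaged shape from a perfect circle:",
      round(worst_deviation, 3))
# a small number: the "middle" of many rough circles approximates the ideal circle
```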


Now, for what is probably the most bizarre idea of all: nothing is actually happening at the level of the universe, nor at the level of a photon. The universe just is. To a photon, which is just a single massless node, everything happens all at once, so nothing happens.

That's right, despite all the words and definitions above, with all their connotations of behavior and movement and spacetime… experience and happening and events and steps and reconfigurations are actually just illusions, in a sense, of subnetworks describing other subnetworks. The totality of the universe includes every possible reconfiguration of the universe – which obviously includes all theories, all explanations, all logics, all computations, all behavior, all schedules in a cross product of each other. No subnetwork is doing anything at all; it simply IS, and is that subnetwork within the specific configuration of the universe as part of the wider set of the whole.

This sounds CRAZY – until you look back on the history of ideas and see that this notion has come up over and over, regardless of the starting point, the condition of the observational tools, or the fads of language and business of the day. It is even observable in how so many systems "develop" first as "concrete" physical, sensory things… and end up yielding, time and time again, to what we call the virtual – strangely looping, recursive networks. Here I am not contradicting myself; instead, this is what exists within the fractal nature of the universe (multiverse!): it is self-similar all the way up and down the scales and across all configurations (histories).

Theories tend to be ignored unless they are useful. I cannot claim utility for everyone in this theory. I do find it helpful for myself in moving between disciplines and not getting trapped in syntactical problems. I find confirmation of my own cognitive bias in the fact that the technologies of loosely connecting the dots – GPS, hyperlinks, search engines, social media, citation analysis, Bayes, and now deep learning/PAC – have yielded a tremendous expansion of information and re-imagining of the world.


Currency, writing, art and music are not concrete physical needs, and yet they mediate our labor, property, government and nation states. Even things we consider "concrete," like food and water, are just encodings of various configurations. Food can be redefined in many ways, and has been over the eons, as our abstracted associations drift. Water seems like a concrete requirement for us, but "us" is under constant redefinition. Should people succeed in creating something human-like (however you define it) in computers or on the Internet, it's not clear water would be any more concrete than solar power, etc.

Then again, if I believe anything I’ve said above, it all already exists and always has.

 

———————————–

 

Chaitin on Algorithmic Information, just a math of networks.
https://www.cs.auckland.ac.nz/~chaitin/sciamer3.html

Platonic solids are just networks
https://en.m.wikipedia.org/wiki/Platonic_solid#Liquid_crystals_with_symmetries_of_Platonic_solids

Real World Fractal Networks
https://en.m.wikipedia.org/wiki/Fractal_dimension_on_networks#Real-world_fractal_networks

Correlation for Network Connectivity Measures
http://www.ncbi.nlm.nih.gov/pubmed/22343126

Various Measurements in Transport Networks (Networks in general)
https://people.hofstra.edu/geotrans/eng/methods/ch1m3en.html

Brownian Motion, the network of particles
https://en.m.wikipedia.org/wiki/Brownian_motion

Semantic Networks
https://en.wikipedia.org/wiki/Semantic_network

MPR
https://en.m.wikipedia.org/wiki/Mathematical_principles_of_reinforcement

Probably Approximately Correct
https://en.m.wikipedia.org/wiki/Probably_approximately_correct_learning

Probability Waves
http://www.physicsoftheuniverse.com/topics_quantum_probability.html

Bayes Theorem
https://en.m.wikipedia.org/wiki/Bayes%27_theorem

Wave
https://en.m.wikipedia.org/wiki/Wave

Locality of physics
http://www.theatlantic.com/science/archive/2016/02/all-physics-is-local/462480/

Complexity in economics
http://www.abigaildevereaux.com/?p=9%3Futm_source%3Dshare_buttons&utm_medium=social_media&utm_campaign=social_share

Particles
https://en.m.wikipedia.org/wiki/Graviton

Gravity is not a network phenomenon?
https://www.technologyreview.com/s/425220/experiments-show-gravity-is-not-an-emergent-phenomenon/

Gravity is a network phenomenon?
https://www.wolframscience.com/nksonline/section-9.15

Useful reframing/rethinking Gravity
http://www2.lbl.gov/Science-Articles/Archive/multi-d-universe.html

Social networks and fields
https://www.researchgate.net/profile/Wendy_Bottero/publication/239520882_Bottero_W._and_Crossley_N._(2011)_Worlds_fields_and_networks_Becker_Bourdieu_and_the_structures_of_social_relations_Cultural_Sociology_5(1)_99-119._DOI_10.11771749975510389726/links/0c96051c07d82ca740000000.pdf

Cause and effect
https://aeon.co/essays/could-we-explain-the-world-without-cause-and-effect

Human Decision Making with Concrete and Abstract Rewards
http://www.sciencedirect.com/science/article/pii/S1090513815001063

The Internet
http://motherboard.vice.com/blog/this-is-most-detailed-picture-internet-ever

———————————–

We have a problem.

As it stands now, the present and near future of economic, social and cultural development primarily derive from computers and programming. The algorithms already dominate our society – they run our politics, they run our financial system, they run our education, they run our entertainment, they run our healthcare. The ubiquitous tracking of everything that can possibly be tracked brought about this situation. We must have programs to make things, to sell things, to exchange things.


The problem is not necessarily the algorithms or the computers themselves but the fact that so few people can program.    And why?   Programming Sucks.

Oh sure, for those that do program and enjoy it, it doesn’t suck. As Much.   But for the 99%+ of the world’s population that doesn’t program a computer to earn a living it’s a terrible endeavour.

Programming involves a complete abstraction away from the world and all surroundings. Programming is disembodied – it is mostly a thought exercise mixed with some of the worst aspects of engineering. Mathematics, especially the higher-order, really crazy stuff, long ago became unapproachable and completely disembodied, requiring no physical engineering or representation at all. Programming, in most of its modern instances, puts consequences very far away from the creative act. That is, in most modern systems it takes days, weeks, months or years to personally and deeply feel the results of what you've built. Programming is ruthless. It's unpredictable. It's 95% or more reinventing the wheel and configuring environments just to run the most basic program. It's all setup, not a lot of creation. So few others understand it that they can't appreciate the craft during the act (only the output is appreciated, counted in users and downloads).

There are a couple of reasons why this is the case – a few theoretical/natural limits and a few self-imposed, engineering and cultural issues.

First, the engineering and cultural issues. Programming languages and computers evolved rather clumsily, built mostly by programmers for other programmers – not for the majority of humans. There has never been a requirement to make programming itself more humanistic, more embodied. Looking back on the history of computers, computing was always done in support of something else, not for its own sake. It was done to Solve Problems. As long as the computing device and program solved the problem, the objective was met. Even the early computer companies famously thought it was silly to think everyone might one day actually use a personal computer. And now we're at a potentially more devastating standstill – it's absurd to most people to think everyone might actually need to program. I'll return to these issues.

Second, the natural limits of computation make for a very severe situation. There are simply things that are non-computable. That is, we can't solve them. Sometimes we can PROVE we can't solve them, but that doesn't get us any closer to solving them. The classic example is the Halting Problem: for a sufficiently complex program you can't predict whether the program will halt or not. The implication is simply that you must run the program and see if it halts. Again, complexity is the key here. If these were relatively small, fast programs with a finite number of possible outcomes, then you could simply run the program across all possible inputs and check the outputs. Problem is… very few programs are that simple, and certainly not any of the ones that recommend products to you, trade your money on Wall Street, or help doctors figure out what's going on in your body.
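
The standard sketch of why this is unavoidable, written as Python (the function names are illustrative – halts() is a hypothetical oracle, not anything you can actually implement):

```python
def halts(program, data) -> bool:
    """Hypothetical oracle: returns True iff program(data) eventually halts."""
    raise NotImplementedError("provably impossible to implement in general")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about program run on itself.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    return "halted"   # predicted to loop -> halt immediately

# contrary(contrary) halts exactly when the oracle says it doesn't,
# so no general-purpose halts() can exist. For real programs we are left
# with running, testing and hoping.
```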

STOP.

This is a VERY BIG DEAL.    Think about it.   We deploy millions of programs a day with completely non-deterministic, unpredictable outcomes.  Sure we do lots of quality assurance and we test everything we can and we simulate and we have lots of mathematics and experience that helps us grow confident… but when you get down to it, we simply don’t know if any given complex program has some horrible bug in it.

This issue rears its head countless times a day. If you've ever been mad at MS Word for screwing up your bullet points, or your browser stops rendering a page, or your internet doesn't work, or your computer freezes… this is what's going on. All of these things are complex programs interacting with other programs, and all of them have millions (give or take millions) of bugs in them. Add to that the fact that all of these things are mutable bits on your computer that viruses or hardware issues can manipulate (you can't be sure the program you bought is the program you currently run), and you can see how things quickly escape our ability to control them.

This is devastating for the exercise of programming. Computer scientists have invented a myriad of ways to temper the reality of the halting problem. Most of these management techniques make programming even more mysterious and challenging, due to the imposition of even more rules that must be learned and maintained. Unlike music and writing and art and furniture making and fashion, we EXPECT and NEED computers to do exactly what we program them to do. Most of the other stuff humans do and create is just fine if it sort of works. It still has value. Programs that are too erratic or, worse, catastrophic are not only not valuable – we want to eliminate them from the earth. We probably destroy some 95%+ of the programs we write.

The craft of programming is at odds with its natural limits. Our expectations, and thus the tools we craft to program, conflict with that actuality. Our use of programs exceeds their possibilities.

And this really isn't due to computers or programming, but to something more fundamental: complexity and prediction. Even as our science shows us more and more that prediction is an illusion, our demands on technology and business and media run counter to that. This fundamental clash manifests itself in programming, programming languages, the hardware of computers and the culture of programming. It is at odds with itself, and in being so conflicted it is unapproachable to those who don't have the ability to stare maddeningly into a screen flickering with millions of unknown rules and bugs. Mastery is barely achievable except for a rare few. And without mastery, enjoyment rarely comes – the sort of enjoyment that can sustain someone's attention long enough to do something significant.

I've thought long and hard about how to improve the craft of programming. I've programmed a lot, led a lot of programming efforts, delivered a lot of software, and scrapped a lot more. I've worked in 10+ languages. I've studied mathematics and logic and computer science and philosophy. I've worked with the greatest computer scientists. I've worked with amazing business people and artists and mathematicians. I've built systems large and small in many different categories. In short, I've yet to find a situation in which programming wasn't a major barrier to progress and thinking.

The solution isn’t in programming languages and in our computers.  It’s not about Code.org and trying to get more kids into our existing paradigm. This isn’t an awareness or interest problem.   The solution involves our goals and expectations.

We must stop trying to solve every problem perfectly. We must stop trying to predict everything. We must stop pursuing The Answer, as if it actually exists. We must stop trying to optimize everything for speed and precision and accuracy. And we must stop applying computerized techniques to every single category of activity – at least in a way where we expect the computer to forever do the work.

We must create art. Programming is art. It is full of accidents and confusions and inconsistencies. We must turn it back into an analog experience rather than a conflicted digital one. Use programming to explore and narrate and experiment rather than to answer and define and calculate.

The tools that spring from those objectives will be more human.  More people will be able to participate.  We will make more approachable programs and languages and businesses.

In the end our problem with programming is one of relation – we’re either relating more or less to the world around us and as computers grow in numbers and integration we need to be able to commune, shape and relate to them.

———————————–

The human race began a path towards illiteracy when moving pictures and sound began to dominate our mode of communication. Grammar checking word processors and the Internet catalyzed an acceleration of the process. Smartphones, 3-D printing, social media and algorithmic finance tipped us towards near total illiteracy.

The complexity of the machines has escaped our ability to understand them – to read them and interpret them – and now, more importantly, to author them. The machines author themselves. We inadvertently author them without our knowledge. And, in a cruel turn, they author us.

This is not a clarion call to arms to stop the machines. The machines cannot be stopped, for we will never want to stop them – they are too intertwined with our survival (the race to stop climate change and/or escape the planet will not be won without the machines). It is a call for a return to literacy. We must learn to read machines and maintain our authorship if we at all wish to avoid unwanted atrocities and a painful decline into possible evolutionary irrelevance. If we wish to mediate the relations between each other, we must remain the authors of those mediations.

It does not take artificial intelligence for our illiteracy to become irreversible. It is not the machines that will do us in and subjugate us and everything else. Intelligence is not the culprit. It is ourselves, and the facets of ourselves that make it too easy to avoid learning what can be learned. We plunged into a dark age before. We can do it again.

We are in this situation, perhaps, unavoidably. We created computers and symbolic systems that are good enough to do all sorts of amazing things. So amazing that we just went and found ways to unleash things without all the seeming slowness of the evolutionary and behavioral consequences we've observed played out on geological time scales. We have unleashed an endless computational kingdom of a variety rivaling that of the entire history of Earth. Here we have spawned billions of devices with billions and billions of algorithms and trillions and trillions and trillions of data points about billions of people and trillions of animals, and a near-infinite hyperlinkage between them all. The benefits have outweighed the downsides in terms of pure survival consequences.

Or perhaps the downside hasn’t caught us yet.

I spend a lot of my days researching, analyzing and using programming languages. I do this informally, for work, for fun, for pure research, for science. It is my obsession. I studied mathematics as an undergraduate – it too is a language most of us are illiterate in, and yet our lives are dominated by it. A decade ago I thought the answer was simply this:

Everyone should learn to program. That is, everyone should learn one of our existing programming languages.

It has more recently occurred to me that this is not only unrealistic, it is actually a terrible idea. Programming languages aren't like English or Spanish or Chinese or any human language. They are much less universal. They force constraints we don't understand and yet don't allow for any wiggle room. We can only speak them by typing incredibly specific commands on a keyboard connected to a computer architecture we thought up 50 years ago – which isn't even close to the dominant form of computer interaction most people use (phones, tablets, TVs, game consoles with games, maps and text messages and mostly consumptive apps). Yes, it's a little more nuanced than that, in that we have user interfaces that try to allow us all sorts of flexibility in interaction and that handle the translation to specific commands for us.

Unfortunately it largely doesn't work. Programming languages are not at all like how humans communicate. They aren't at all how birds or dogs or dolphins communicate. They start as an incredibly small set of rules that must be obeyed or something will definitely break down (a bug! a crash!). Sure, we can write an infinite number of programs. Sure, most languages and the computers we use to run the programs written in them are universal computers – but that doesn't make them at all as flexible and useful as natural language (words, sounds, body language).

As it stands now we must rely on about 30 million people on the entire planet to effectively author and repair the billions and billions of machines (computer programs) out there (http://www.infoq.com/news/2014/01/IDC-software-developers).

Only 30 million people speak computer languages effectively enough to program them. That is a very far cry from a universal or even natural language. Most humans can understand any other human, regardless of the language, on a fairly sophisticated level – we can easily tell each other's basic state of being (fear, happiness, anger, surprise, etc) and begin to scratch out sophisticated relationships between ideas. We cannot do this at all with any regularity or reliability with computers. Certainly we can communicate some highly specific ideas/words/behaviors to some highly specific programs – but we cannot converse with a program/machine in any general way, even remotely. We can only rely on some of the 30 million programmers to improve the situation, slowly.

If we're going to be literate in the age of computation, our language interfaces with computers must become much better. And I don't believe that's going to happen by billions of people learning Java or C or Python. No, it's going to happen by the evolution of computers and their languages becoming far more human-authorable. And it's not clear the computers' survival depends on it. I'm growing in my belief that humanity's survival depends on it, though.

I've spent a fair amount of time thinking about what my own children should learn in regards to computers. And I have not at all shaped them into learning some specific language of today's computers. Instead, I've focused on them asking questions and not being afraid of the confusing, probabilistic nature of the world. It is my educated hunch that the computer languages of the future will account for improbabilities and actually rely on them, much as our own natural languages do. I would rather have my children be able to understand our current human languages in all their oddities and all their glorious ability to express ideas and questions, and forever be open to new and different interpretations.

The irony is… teaching children to be literate in today's computer programs, as opposed to human languages and expressions, is, I think, likely to leave them more illiterate in the future, when the machines or their human authors have developed a much richer way to interact. And yet the catch-22 is that someone has to develop these new languages. Who will do it if not myself and my children? Indeed.

This is why my own obsession is to continue to push forward a more natural and messier idea of human-computer interaction. It will not look like our engineering efforts today, with their focus on speed and efficiency and accuracy. Instead it will focus on richness and interpretative variety and serendipity and survivability over many contexts.

Literacy is not a complete efficiency. It is a much deeper phenomenon. One that we need to explore further, and in that exploration not settle for the computational world as it is today.

———————————–

The Point

Everything is a pattern and connected to other patterns.   The variety of struggles, wars, businesses, animal evolution, ecology, cosmological change – all are encompassed by the passive and active identification and exploitation of changes in patterns.

What is Pattern

Patterns are thought of in a variety of ways – a collection of data points, pictures, bits and bytes, tiling. All of these common sense notions can be mapped to the abstract notion of a graph or network of nodes and their connections, edges. It is not important, for the sake of the early points of this essay, to worry too much about the concept of a graph or network or its mathematical or epistemological construction. The common sense ideas that come to mind should suffice – everything is a pattern connected to other patterns. E.g. cells are connected to other cells, sometimes grouped into organs connected to other organs, sometimes grouped into creatures connected to other creatures.

Examples

As can be imagined, the universe has a practically infinite number of methods of pattern identification and exploitation. Darwinian evolution is one example of a passive pattern identification and exploitation method. The basic idea behind it is generational variance with selection by consequences. Genetics combined with behavior within environments encompass various strategies emergent within organisms which either hinder or improve the strategy's chance of survival. Broken down, perhaps too simplistically, an organism (or collection of organisms or raw genetic material) must be able to identify threats, energy sources and replication opportunities and exploit these identifications better than the competition. This is a passive process overall because the source of identification and exploitation is not built into the pattern selected; it is emergent from the process of evolution. On the other hand, sub-processes within the organism (the pattern object we're considering here) can be active – such as in the case of the processing of an energy source (eating and digestion and metabolism).
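
A bare-bones sketch of "generational variance with selection by consequences" (entirely my own toy setup – bit-string organisms and an environment pattern as the selecting consequence, not a model of any real biology):

```python
import random
random.seed(3)

ENVIRONMENT = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # the pattern to be "identified"

def fitness(organism):
    """Selection by consequences: how well the organism matches its environment."""
    return sum(o == e for o, e in zip(organism, ENVIRONMENT))

def mutate(organism, rate=0.05):
    """Generational variance: copy with occasional random bit flips."""
    return [bit ^ (random.random() < rate) for bit in organism]

# A population of random bit-string "organisms".
population = [[random.randint(0, 1) for _ in ENVIRONMENT] for _ in range(50)]

for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                       # consequences select
    offspring = [mutate(random.choice(survivors)) for _ in range(25)]
    population = survivors + offspring                # variance re-enters

print("best match to the environment:", max(map(fitness, population)),
      "of", len(ENVIRONMENT))
```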

Other passive pattern processes include the effects of gravity on solar systems and celestial bodies, on down to their effects on planetary ocean tides and other phenomena. Here it is harder to spot the identification aspect. One must abandon the Newtonian concept and focus on relativity, where gravity is the name for changes to the geometry of spacetime. What is identified is the geometry, and different phenomena exploit different aspects of the resulting geometry. Orbits form around a sun because of the sun's dominant effect on the geometry, and the result can be exploited by planets that form with the right materials and fall into just the right orbit to be heated just right to create oceans gurgling up organisms and so on. It is all completely passive – at least with our current notion of how life may have formed on this planet. It is not hard to imagine, based on our current technology, how we might create organic life forms by exploiting identified patterns of chemistry and physics.

In similar ways the trajectory of artistic movements can be painted within this pattern theory. Painting is an active process of identifying form, light, composition and materials and exploiting their interplay to represent, misrepresent or simply present pattern. The art market is an active process of identifying valuable concepts or artists or ideas and exploiting them before mimicry or other processes over-exploit them until the value of novelty or prestige is nullified.

Language and linguistics are the identification and exploitation of symbols (sounds, letters, words, grammars) that carry meaning (the meaning being built up through association (pattern matching) with other patterns in the world: behavior, reinforcers, etc). Religion, by the organizers, is the active identification and exploitation of imagery, language, story, tradition and habits that maintain devotional and evangelical patterns. Religion, by the practitioner, can be active and passive maintenance of those patterns. Business and commerce is the active (sometimes passive) identification and exploitation of efficient and inefficient patterns of resource availability, behavior and rules (asset movement, current social values, natural resources, laws, communication mediums, etc).

There is not a category of inquiry or phenomena that can escape this analysis.   Not because the analysis is so comprehensive but because pattern is all there is. Even the definition and articulation of this pattern theory is simply a pattern itself which only carries meaning (and value) because of the connection to other patterns (linear literary form, English, grammar, word processing programs, blogging, the Web, dictionaries).

Mathematics and Computation

It should be of little surprise that mathematics and computation form the basis of so much of our experience now. If pattern is everything and all patterns are in competition, it does make some common sense that efficient pattern translation and processing would arise as a dominant concept, at least in some localized regions of existence.

Mathematics' effectiveness in a variety of situations/contexts (pattern processing) is likely tied to its more general, albeit often obtuse and very abstracted, ability to identify and exploit patterns across a great many categories. And yet we've found that mathematics is likely NOT THE END GAME. As if anything could be the end game. Mathematics' own generality (which we could read as reductionism and a lack of full fidelity to patterns) does it in – the proof of incompleteness showed that mathematics itself is a pattern of patterns that cannot encode all patterns. Said differently, mathematics' incompleteness necessarily means that some patterns cannot be discovered nor encoded by the process of mathematics. This is not a hard metaphysical concept. Incompleteness merely means that even for formal systems as ordinary as arithmetic there are statements (theorems) whose logical truth or falsity cannot be established within the system. Proofs are also patterns to be identified and exploited (is this not what pure mathematics is!), and yet we know, because of proof, that we will always have patterns, called theorems, that will not have a proof. Lacking a proof for a theorem doesn't mean we can't use the theorem; it just means we can't count on the theorem to prove another theorem, i.e. we won't be doing mathematics with it. It is still a pattern, like any sentence or painting or concept.

Robustness

The effectiveness of mathematics is its ROBUSTNESS. Robustness (a term I borrow from William Wimsatt) is the feature of a pattern whereby, when it is processed from multiple other perspectives (patterns), the inspected pattern maintains its overall shape. Some patterns maintain their shape only within a single or limited perspective – all second-order and higher effects are like this. That is, anything that isn't fundamental is some order of magnitude less robust than things that are. Spacetime geometry seems to be a highly robust pattern of existential organization. The effect-carrying ether proposed more than 100 years ago is not. Individual artworks are not robust – they appear different from every different perspective. Color as commonly described is not robust. Wavelength is.

While much of mathematics is highly robust or rather describes very robust patterns it is not the most robust pattern of patterns of all. We do not and likely won’t ever know the most robust pattern of all but we do have a framework for identifying and exploiting patterns more and more efficiently – COMPUTATION.

Computation, by itself. 

What is computation?

It has meant many things over the last 150 years. Here it is defined simply as patterns interacting with other patterns. By that definition it probably seems like a bit of a cheat to define the most robust pattern of patterns we've found as patterns interacting with other patterns. However, it cannot be otherwise. Only a completely non-reductive concept would fit the necessity of robustness. The nuance of computation is that there are more or less universal computations. The ultimate robust pattern of patterns would be a truly universal-universal computer that could compute anything, not just what is computable. Almost all real numbers are not computable; the integers are. A "universal computer" as described by today's computer science is a program/computer that can compute all computable things. So a universal computer can compute the integers, but it cannot compute the full continuum of the real numbers – it can only approximate individual ones like pi, e, or the square root of 2 to finite precision. We can prove this, and have (the halting problem, incompleteness, set theory…). So we're not at a complete loss interpreting patterns of real numbers (irrational numbers in particular). We can and do compute with approximations of pi and e and square roots millions of times a second. In fact, this is the key point. Computation, as informed by mathematics, allows us to identify and exploit patterns far better than any other apparatus humans have devised. However, as one would expect, the universe itself computes and computes itself. It also has no problem identifying and exploiting patterns of an infinitude of types.
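
A small illustration of that "approximation is enough" point (my own example): we never hold the whole real number, but we can compute the square root of 2 to whatever finite precision we ask for.

```python
from decimal import Decimal, getcontext

def sqrt2(digits: int) -> Decimal:
    """Approximate the square root of 2 to roughly `digits` significant digits."""
    getcontext().prec = digits + 5          # working precision with some headroom
    x, guess = Decimal(2), Decimal(1)
    for _ in range(digits):                 # Newton's method converges very quickly
        guess = (guess + x / guess) / 2
    return +guess                           # round to the current precision

print(sqrt2(50))    # 1.4142135623730950488... to roughly 50 digits
```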

Universal Computation

So is the universe using different computation than we are? Yes and no.   We haven’t discovered all the techniques of computation at play. We never will – it’s a deep well and new approaches are created constantly by the universe. But we now have unlocked the strange loopiness of it all.   We have uncovered Turing machines and other abstractions that allow us to use English-like constructs to write programs that get translated into bits for logic gates in parallel to compute and generate solutions to math problems, create visualizations, search endless data, write other programs, produce self replicating machines, figure out interesting 3D printer designs, simulate markets, generate virtual and mixed realities and anything else we or the machines think up.

What lies beneath it all, though, is the very abstract yet simple concept of networks. Nodes and edges. The mathematics and algorithms of networks. Pure relation between things. Out of the simple connection of things to things arise all the other phenomena we experience. The network is limitless – it imposes no guardrails on what can or can't happen. That it is a network does explain why all possibilities exhibit as they do, and it shapes the relative emergent levels of phenomena and experience.

The computation of pure relation is ideal. It only supersedes (only really makes sense to consider over) the value of reductionist modes of analysis, creation and pattern processing when the alternative pattern processing is not sufficiently accurate and/or has become sufficiently inefficient to provide relative value for its reduction. That is, a model of the world or of a given situation is only valuable to the extent that it doesn't sacrifice too much accuracy for efficiency. It turns out that for most day-to-day situations Newtonian physics suffices.

What Next

We've arrived at a point in discovery and creation where the machines and the machine-human-earth combinations are venturing into virtual, mixed and alternate realities for which the current typical modes of investigation (pattern recognition and exploitation) are not sufficient. The Large Hadron Collider is an example, and less an extreme example than it used to be. The patterns we want to understand and exploit – the quantum, the near-light-speed and the unimaginably large (the entire web index, self-driving cars, etc) – are of such a different magnitude and kind. Then, when we've barely scratched the surface there, we get holograms and mixed reality, which will create their own web and their own physical systems as rich and confusing as anything we have now. Who can even keep track of the variety of culture and being and commerce and knowledge in something such as Minecraft? (And if we can't keep track (pattern-identify), how can we exploit (control, use, attach to other concepts)?)

The pace of creation and discovery will never be less in this local region of spacetime. While it may not be our goal, it is our unavoidable fate (yes, that's a scary word) to continue to compute and to take a more computational approach to existence – the identification and exploitation of patterns by other patterns seems to carry this self-reinforcing loop of recursion and the need for ever more clarifying tools of inspection that themselves need more impressive means of inspection… Everything in existence replicates, passively or actively, and at a critical level of interconnectivity (complexity, patterns connected to patterns) self-inspection (reasoning, introspection, analysis, recursion) becomes necessary to advance to the next generation (to explore exploitation strategies).

Beyond robotics and 3D printing and self-replicating and evolutionary programs, the key pattern-processing concept humans will need is a biological approach to reasoning about programs/computation. Biology is a way of reasoning that attempts to classify patterns by similar behavior/configurations/features. And in those similarities it finds ways to relate things (sexual reproduction = replication, metabolism = energy processing, etc). It is necessarily both reductionist, in its approach to categorization, and anti-reductionist, in its approach of looking at everything anew. Programs/computers escape our human (and theoretical) ability to understand them, and yet we need some way to make progress if we, ourselves, are to persist alongside them.

And So.

It's quite possible this entire train of synthesis is a justification for my own approach to life and my existence. And this would be consistent with my claims above. I can't do anything about the fact that my view is entirely biased by my own existence as a pattern made of patterns of patterns, all in the lineage of humans who emerged from hominids and so on, all the way down to whatever ignited the patterns of life on earth.

I could be completely wrong. Perhaps some other way of synthesizing existence all the way up and down is right. Perhaps there's no universal way of looking at it. Though it seems highly unlikely, and very strange, to me that patterns at one level or in one perspective couldn't be analyzed abstractly and applied across and up and down levels. And the very idea that pattern synthesis of patterns is itself fundamental strikes me as much more sensible, useful and worth pursuing than anything else we've uncovered and cataloged to date.

———————————–

Recently there’s been hubbub around Artificial Intelligence (AI) and our impending doom if we don’t do something about it. While it’s fun to scare each other in that horror/sci-fi movie kind of way, there isn’t much substance behind the various arguments floating about regarding AI.

The fears people generally have are about humans losing control and, more specifically, about an unrestrained AI exacting its own self-derived and likely world-dominating objectives on humankind and the earth. These fears aren't unjustified in a broad sense. They simply don't apply to AI – neither the artificial part nor the intelligence part. More importantly, the fears have nothing to do with AI but instead with the age-old fear that humankind might not be the point of all existence.

We do not have a functional description of intelligence, period. We have no reliable way to measure it. Sure, we have IQ and other tests of "ability" and "awareness" and the Turing Test, but none of these actually tells you anything about "intelligence" or what might actually be going on with things we consider intelligent. We can measure whether some entity accomplishes a goal or performs some behavior reliably or demonstrates new behavior acquisition in response to changes in its environment. But none of those things establish intelligence and the presence of an intelligent being. They certainly don't establish something as conscious or self-aware or moral or purpose-driven, nor any other reified concept we choose as a sign of intelligence.

The sophisticated fear peddlers will suggest that the above is just semantics and that we all know, in a common sense way, what intelligence is. This is simply not true. We don't. Colleges can't fully count on the SAT, phrenology turned out to be horribly wrong, the Turing Test doesn't settle it, and so on. Go ahead, do your own research on this. No one can figure out just exactly what intelligence is and how to measure it. Is it being able to navigate a particular situation really well? Is it the ability to assess the environment? Is it massive pattern matching? Is it particular to our 5 senses? One sense? Is it brain-based? Is it pure abstract thinking? What exactly does it mean to be intelligent?

It’s worse for the concept of artificial. It’s simply the wrong term. Artificial comes from very old ideas about what’s natural and what’s not natural. What’s real is what the earth itself does and not what is the work of some humankind process or a machine. Artificial things are made of metals and powered by electricity and coal and involve gears and circuits. Errrrr. Wait… many things the earth makes are made of metal and their locomotion comes from electricity or the burning of fuel, like coal. In fact, humans were made by the earth/universe. The division between artificial and natural is extremely blurry and is non-existent for a reified concept like intelligence. We don’t need the term artificial nor the term intelligence. We need to know what we’re REALLY dealing with.

So here we are… being pitched a fearsome monster of AI which has zero concrete basis, no way to observe it, and zero examples of its existence as described in most discussions. But still the monster is in the closet, right?

For the sake of argument (which is all these counterfactual future predictions of doom are), let's assume there is some other entity or set of entities that is more "intelligent" than humans. We will need to get a sense of what that would look like.

Intelligence could be loosely described by the presence of complex, nuanced behaviors exhibited in response to the environment. Specifically, an entity is observed as intelligent if it responds effectively to changing conditions (internal as well as environmental). It must recognize changes, be able to adjust to those changes, and evaluate the consequences of the changes it makes, as well as any changes the environment makes in response.

What seems to be the basis of intelligent behavior (ability to respond to complex contingencies) in humans comes from the following:

  • Genetic and Epigenetic effects/artifacts evolved from millions of years of evolutionary experiments e.g. body structures, fixed action patterns
  • Sensory perception from 5 basic senses e.g. sight, touch, etc
  • Ability to pattern match in a complex nervous system e.g. neurological system, various memory systems
  • Cultural/Historical knowledge-base e.g. generationally selected knowledge trained early into a new human through child rearing, media and school
  • Plastic biochemical body capable of replication, regeneration and other anti-fragile effects e.g. stem cells, neuro-plasticity
  • more really complex stuff we have yet to uncover

Whatever AI we think spells our demise will most likely have something like the above (something functionally equivalent). Right? No? While there is a possibility that some completely different type of "intelligent" being exists, what I'm suggesting is that the general form of anything akin to "intelligent" would have these features:

  • Structure that has been selected for fitness over generations in response to its environment and overall survivability
  • Multi-modal Information perception/ingestion
  • advanced pattern recognition and storage
  • Knowledge reservoir (previously explored patterns) to pull from that reduces the need for a new entity to explore the space of all possibilities for survival
  • A resilient, plastic replication mechanism capable of abstract replication and structural exploration

And distilling that down to even more raw abstractions:

  • multi-modal information I/O
  • complex pattern matching
  • large memory
  • efficient replication with random mutation and mutation from selected patterns

What are the implementation limits of those abstracted properties? It turns out we don't know. It's very murky. Consider a rock. Few of us would consider a plain old rock intelligent. But why not? What doesn't it fulfill in the above? Rocks adjust to their environment – think of erosion and the overall rock cycle. Their structures contain eons of selection and hold a great deal of information – think of how the environment is encoded in the generational build of a rock, its crystal structure and so forth. Rocks have an efficient replication scheme – again, think of the rock cycle, the material of a rock being able to be absorbed into other rocks and so forth.

Perhaps you don't buy that a rock is intelligent. There's nothing in my description of intelligence, or in the other reified definitions of intelligence, that absolutely says a rock isn't intelligent. It seems to fulfill the basics… it just does so over timescales we're not normally looking at. A rock won't move through a maze very quickly or solve a math problem in our lifetime. I posit, though, that it does do these things over long expanses of time. The network of rocks that forms mountains and river beds and ocean bottoms and the entire earth's crust and the planets of our solar system exhibits the abstract properties above quite adeptly – again, just at spacetime scales we're not used to talking about in these types of discussions.

I could go on to give other examples such as ant colonies, mosquitoes, dolphins, pigs, The Internet and on and on. I doubt many of these examples will convince many people as the reified concept of “intelligence as something” is so deeply embedded in our anthropocentric world views.

And so I conclude those that are afraid and want others to be afraid of AI are actually afraid of things that have nothing to do with intelligence – that humans might not actually be the alpha and omega and that we are indeed very limited, fragile creatures.

The first fear from anthropocentrists is hard to dispel. It’s personal. All the science and evidence of observation of the night sky and the depths of our oceans makes it very clear humans are a small evolutionary branch of an unfathomably large universe. But to each person we all struggle with our view from within – the world literally, to ourselves, revolves around us. Our vantage point is such that from our 5 senses everything comes into us. Our first person view is such that we experience the world relative to ourselves. Our memories are made of our collections of first person experiences. Our body seems to respond from our own relation to the world.

And so the fear of an AI that could somehow end our own "top of the food chain" position makes sense while still being unjustified. The reality is… humans aren't the top of any chain. The sun will one day blow up and quite possibly take all humans out with it. Every day the earth spins and whirls and shakes with winds, heat, snow and quakes, and without intention takes humans out in the process. And yet we don't fear those realities like folks are trying to get us to fear AI. The key to this fear seems to be intention. We're afraid of anything that has the INTENT, the purpose, the goal of taking humans, and our own selves, out of the central position of existence.

And where, how, when, why would this intent arise? Out of what does any intent arise? Does this intent actually exist? We don’t really have any other data store to try to derive an answer to this other than humanity’s and our own personal experience and histories. Where has the intent to dominate or destroy come from in humans? Is that really our intent when we do it? Is it a universal intent of humankind? Is it something intrinsically tied to our make up and relation to the world? Even if the intent is present, what are its antecedents? And what of this intent? If the intent to dominate others arises in humans how are we justified in fearing its rise in other entities?

Intent is another reified concept. It doesn’t really exist or explain anything. It is a word that bottles up a bunch of different things going on. We have no more intention than the sun. We behave. We process patterns and make adjustments to our behavior – verbal and otherwise. Our strategies for survival change based on contingencies and sometimes our pattern recognition confuses information – we make false associations about what is threatening our existence or impeding our basic needs and wants (chasing things that activate our adrenaline and dopamine production…). It’s all very complex. Even our “intentions” are complex causal thickets (a concept I borrow from William Wimsatt).

In this complexity it’s all incredibly fragile. Fragile in the sense that our pattern recognition is easily fooled. Our memories are faulty. Our bodies get injured. Our brains are only so plastic. And the more our survival depends on rote knowledge, the less plastic our overall machinery can be. Fragile, as referred to here, is a very general concept about the stability of any given structure or pattern – belief system, behavioral schedule, biochemical relations, rock formations… any structure.

The fear involved in fragility and AI is really about less complex entities that are highly specialized in function, sensory scope and pattern matching ability. The monster presented is a machine or group of machines hell-bent on human subordination and destruction, with the weaponry and no fragility in its function or intent – it cannot be diverted by itself nor by an outside force.
OR

We fear the accidental AI. The AI that accidentally happens into human destruction as a goal.

In both cases it is not intelligence we fear but instead simple exploitation of our own fragility.

And yet, as appealing as a fear this seems to be, it is also unjustified. There’s nothing that suggests even simple entities can carry out single “purpose” goals in a complex network. The complexity of the network itself prevents total exploitation by simple strategies.

Consider the wonderful game theoretic models built around very simple games like the Prisoner’s Dilemma. It turns out that total exploitation simply doesn’t work over the long term as a domination strategy – in repeated play, cooperative but retaliatory strategies like Tit-For-Tat consistently outperform the all-out exploiters. Even in very simple situations, domination and total exploitation turn out to be a poor survival strategy for the exploiter.
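If you want to see this for yourself, here’s a minimal sketch – just my own toy Python, not from any particular library – pitting an always-defect exploiter against Tit-For-Tat in a repeated Prisoner’s Dilemma with the standard payoff values:

```python
# Toy iterated Prisoner's Dilemma. 'C' = cooperate, 'D' = defect.
# Standard payoffs: temptation 5, mutual cooperation 3, mutual defection 1, sucker 0.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror whatever the opponent did last round.
    return 'C' if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(always_defect, tit_for_tat))  # (204, 199): one early windfall, then grinding mutual defection
print(play(tit_for_tat, tit_for_tat))    # (600, 600): reciprocators do far better together
```

The exploiter wins its one head-to-head encounter by a hair and then stagnates, while any pair of reciprocators piles up nearly three times the payoff – which is roughly why the nasty strategies sank in Axelrod’s famous tournaments.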

So domination itself becomes a nuanced strategy subject to all sorts of entropy, adjustments and complexity.

Maybe this isn’t a convincing rebuttal. After all, what about the simple idea that someone creates a really simplistic AI and arms it with nuclear warheads? Certainly even a clumsy system (at X time or under Y conditions, nuke everything) armed with nukes would have the capability to destroy us all. Even this isn’t a justified fear. In the first place, it wouldn’t be anything at all AI-like in any sense if it were so simple. So fearing AI is a misplaced fear. The fear is really about the capability of a nuke. Insert bio-weapons or whatever other WMD one wants to consider. In all of those cases it has nothing to do with the wielder of the weapon and its intelligence and everything to do with the weapon.

However, even having a total fear of WMDs is myopic. We simply do not know what the fate of humankind, nor of the earth, would be should some entity launch all-out strategies of mass destruction. Not that we should attempt to find out, but it seems a tad presumptuous for anyone to pretend to be able to know what exactly would happen at the scales of total annihilation.

Our only possible data point for what total annihilation of a species on earth might look like is that of the dinosaurs and other ancient species. We have many theories about their mass extinction, but we don’t really know, and that extinction took a very long time, was selective and didn’t end in total annihilation (hell, it likely led to humanity… so…) [see this for some info: http://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_extinction_event].

The universe resists simple explanations and that’s likely a function of the fact that the universe is not simple.

Complex behavior of adaptation and awareness is very unlikely to bring about total annihilation of humankind. A more likely threat is from simple things that accidentally exploit fragility (a comet striking the earth, blotting out the sun so that anything that requires the sun to live goes away). It’s possible we could invent and/or have invented simple machines that can create what amounts to a comet strike. And what’s there to fear even in that? Only if one believes that humanity, as it is, is the thing to protect is any fear about its annihilation by any means, “intelligent” or not, justified.

Protecting humanity as it is is a weak logical position as well, because there’s really no way to draw a box around what humanity is and whether there’s some absolute end state. Worse than that, it strikes me personally that people’s definition of humanity, when promoting various fears of technology or the unknown, is decidedly anti-aware and runs counter to a sense of exploration. That isn’t a logical position – it’s a choice. We can decide to explore or decide to stay as we are. (Maybe)

The argument here isn’t for an unrestrained pursuit of AI. Not at all. AI is simply a term and not something to chase or prevent unto itself – it literally, theoretically isn’t actually a thing. The argument here is for restraint through questioning and exploration. The argument is directly against fear at all. Fear is the twin of absolute belief – the confusion of pattern recognition spiraling into a steady state. A fear is only justified by an entity against an immutable pattern.

If you fear AI, then you must, by extension, fear the evolution of human capability – and the capability of any animal or any network of rocks, etc. And the reality is… all those fears will never result in any preventative measures against destruction. Fear is stasis. Stasis typically leads to extinction – eons of evidence make this clear. Fear-peddlers are really promoting stasis, and that’s the biggest threat of all.

Read Full Post »

Computing Technology enables great shifts in perspective. I’ve long thought about sharing why I love computing so much. Previously I’m not sure I could have articulated it without a great deal of confusion and vagueness, or worse, zealotry. Perhaps I still can’t communicate about it well, but nonetheless I feel compelled to share.

Ultimately I believe that everything in existence is computation but here in this essay I am speaking specifically about the common computer on your lap or in your hands connected to other computers over the Internet.

I wish I could claim that computers aren’t weapons or put to use in damaging ways or don’t destroy the natural environment. They can and are used for all sorts of destructive purposes. However I tend to see more creation than suffering coming from computers.

The essential facets of computing that give it so much creative power are interactive programs and simulation. With a computer bits can be formed and reformed without a lot of production or process overhead. It often feels like there’s an endless reservoir of raw material and an infinite toolbox (which, in reality, is true!). These raw materials can be turned into a book, a painting, a game, a website, a photo, a story, a movie, facts, opinions, ideas, symbols, unknown things and anything else we can think up. Interactive programs engage users (and other computers) in a conversation or dance. Simulation provides us all the ability to try ideas on and see how they might play out or interact with the world. All possible from a little 3-4 lb slab of plastic, metal and silicon flowing with electricity.
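To make “simulation” concrete, here’s a throwaway sketch (a few lines of Python – my choice of language, nothing about the point depends on it): estimating π by throwing random darts at a square, the kind of idea you can try on in thirty seconds with that little slab.

```python
# Estimate pi by throwing random darts at the unit square and counting
# how many land inside the quarter circle of radius 1.
import random

def estimate_pi(darts=1_000_000):
    inside = sum(
        1 for _ in range(darts)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / darts

print(estimate_pi())  # ~3.14, wobbling a little on every run
```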

Connecting a computer to the Internet multiplies this creative power through sharing. While it’s true a single computer is an infinite creative toolbox the Internet is a vast, search-able, discoverable recipe box and experimentation catalog. Each of us is limited by how much time we have each day, but when we are connected to millions of others all trying out their own expressions, experiments and programs we all benefit. Being able to cull from this vast connected catalog allows us all to try, retry, reform and repost new forms that we may never have been exposed to. Remarkable.

Is there the same creative power out in the world without computers? Yes and no. A computer is probably the most fundamental tool ever discovered (maybe we could call it crafted, but I think it was discovered). Bits of information are the most fundamental material in the multiverse. Now, DNA, proteins, atoms, etc. are also fundamental or primary (think: you can build up incredible complexity from fundamental materials). The reason I give computers the edge is that the things we prefer to make we can make within our lifetime, and often in much shorter timeframes. It would take a long time for DNA and its operating material to generate the variety of forms we can produce on a computer.

Don’t get me wrong, there’s an infinite amount of creativity in the fundamental stuff of biology and some of it happens on much shorter than geological timescales. You could easily make the case that it would take a traditional computer probably longer than biology to produce biological forms as complex as animals. I’m not going to argue that. Nor do I ignore the idea that biology produced humans, which produced the computer, so really biology is possibly more capable than the computer. That said, I think we’re going to discover over time that computation is really at the heart of everything, including biology, and that computers as we know them probably exist in abundance out in the universe. That is, computers are NOT dependent on our particular biological history.

Getting out of the “can’t really prove these super big points about the universe” talk and into the practical – you can now get a computer with immense power for less than $200. This is a world-changing reality. Computers are capable of outputting infinite creativity and can be obtained and operated for very modest means. I suspect that price will come down to virtually zero very soon. My own kids have been almost exclusively using Chromebooks for a year for all things education. It’s remarkably freeing to be able to pull up materials, jump into projects, research, create, etc. anywhere at any time. Escaping the confines of a particular space and time to learn and work with tools has the great side effect of encouraging learning and work anywhere at any time.

There are moments where I get slight pangs of regret about my choices to become proficient in computing (programming, designing, operating, administrating). There are romantic notions of being a master painter or pianist or mathematician and never touching another computing device again. In the end I still choose to master computing because of just how much it opens me up creatively. Almost everything I’ve been able to provide for my family and friends has come from having a joyous relationship with computing.

Most excitingly to me… the more I master computers the more I see the infinitude of things I don’t know and just how vast computing creativity can be. There aren’t a lot of things in the world that have that effect on me. Maybe I’m simply not paying attention to other things but I strongly suspect it’s actually some fundamental property of computing – There’s Always More.  

Read Full Post »

In Defense of The Question Is The Thing

I’ve oft been accused of being all vision with little to no practical finishing capability. That is, people see me as a philosopher not a doer. Perhaps a defense of myself and philosophy/approach isn’t necessary and the world is fine to have tacticians and philosophers and no one is very much put off by this.

I am not satisfied. The usual notion of doing and what is done and what constitutes application is misguided and misunderstood.

The universe is determined yet unpredictable (see complexity theory, cellular automata). Everything that happens and is has antecedents (see behaviorism, computation, physics). Initial conditions have a dramatic effect on system behavior over time (see chaos theory). These three statements are roughly equivalent or at least very tightly related. And they form the basis of my defense of what it means to do.
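Here’s a tiny illustration of all three statements at once – just a sketch using the textbook logistic map, not anything proprietary to this essay. The rule is completely determined, yet nudging the starting point by one part in a billion produces a wildly different trajectory within a few dozen steps:

```python
# The logistic map x -> r*x*(1-x) at r=4 is fully deterministic,
# yet tiny differences in initial conditions blow up exponentially.

def trajectory(x, r=4.0, steps=50):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.200000000)
b = trajectory(0.200000001)  # differs by one part in a billion

for n in (10, 25, 40):
    print(n, round(a[n], 6), round(b[n], 6), "diff:", round(abs(a[n] - b[n]), 6))
# By step 40 the two runs bear no resemblance to each other.
```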

“Now I’m not antiperformance, but I find it very precarious for a culture only to be able to measure performance and never be able to credit the questions themselves.” – Robert Irwin, page 90, Seeing Is Forgetting the Name of the Thing One Sees

The Question Is The Thing! And by The Question I mean the context or the situation or the environment or the purpose. And I don’t mean The Question or purpose as assigned by some absolute authority. It is the sense of a particular or relative instance that we consider a question. What is the question at hand?

Identifying and really asking the question at hand drives the activity to and fro. To do is to ask. The very act of seriously asking a question delivers the do, the completion. So what people mistake in me as “vision” is really an insatiable curiosity and a need to ask the right question. To do without the question is nothing – it’s directionless motion, a random walk. To seriously ask a question, every detail of the context is important. To begin answering the question requires the environment to be staged and the materials provided for answers to emerge.

There is no real completion without a constant re-asking of the question. Does this answer the question? Did that answer the question?

So bring this to something a lot of people associate me with: web and software development. In the traditional sense I haven’t written a tremendous amount of code myself. Sure, I’ve shipped lots of pet projects, chunks of enterprise systems, scripts here and there, and the occasional well-crafted app and large-scale system. There’s a view though that unless you wrote every line of code or contributed some brilliant algorithm line for line, you haven’t done anything. The fact is there’s a ton of code written every day on this planet and very little of it would I consider “doing something”. Most of it lacks a question; it’s not asking a question – a real, big, juicy, ambitious question.

Asking the question in software development requires setting the entire environment up to answer it. Literally the configuration of programmer desks, designer tools, lighting, communication cadence, resources, mixing styles and on and on. I do by asking the question and configuring the environment. The act of shipping software takes care of itself if the right question is seriously asked within an environment that lets answers emerge.

Great questions tend to take the shape of How Does This Really Change the World for the User? What new capability does this give the world? How does this extend the ability of a user to X? What is the user trying to do in the world?

Great environments to birth answers are varied and don’t stay static. The tools, the materials all need to change per the unique nature of the question.

Often the question begs us to create less. Write less code. Tear code out. Leave things alone. Let time pass. Write documentation. Do anything but add more stuff that stuffs the answers further back.

The question and emergent answers aren’t timeless or stuck in time. As the context changes, the question or the shape of the question may change.

Is this to say I’m anti-shipping (or anti-performance, as Irwin put it)? No. Let’s put it this way: we move too much, ask too little, and actually don’t change the world that much. Do the least amount to affect the most – that’s more of what I think the approach should be.

The question is The Thing, much more than the thing that results from the work. The question has all the power. It starts and ends there.

Read Full Post »

The aim of most businesses is to create wealth for those working at them. Generally it is preferred to do this in a sustainable, scalable fashion so that wealth may continue to be generated for a long time. The specific methods may involve seeking public valuation in the markets, selling more and more product directly and profitably, private valuation and investment, and more. The aim of most technology-based companies is to make the primary activity and product of the business involve technology. The most common understanding of “technology” refers to information technology, biotechnology, advanced hardware and so forth – i.e. tools or methods that go beyond long-established ways of doing things and/or analog approaches. So the aims of a technology company are to create and maintain sustainable, scalable wealth generation through technological invention and execution.

Perhaps there are better definitions of terms and clearer articulations of the aims of business, but this will suffice to draw out an argument for how technology companies could fully embrace the idea of a platform and, specifically, a technological platform. Too often the technology in a technology company exists solely in the end product sold to the market. It is a rare technology company that embraces technological thinking everywhere – re: big internet media still managing advertising contracts through paper and faxes, expense reports through stapled papers and static Excel spreadsheets, and so on. There are even “search” engine companies that are unable to search over all of their own internal documentation and knowledge.

The gains of technology are significant when applied everywhere in a company. A technological product produced by primitive and inefficient means is usually unable to sustain its competitive edge, as those with technology in their veins quickly catch up to any early lead by a first, non-technical mover. Often what the world sees on the outside of a technology company is The Wizard of Oz: a clever and powerful façade of technology – a vision of smoking machines doing unthinkable things – when in reality it is the clunky hubbub of a duct-taped factory of humans pulling levers and making machine noises. If the end result is the same, who cares? No one – if the result can be maintained. It never scales to grow the human factory of tech-façade making. Nor does it scale to turn everything over to the machines.

What’s contemplated here is a clever and emergent interaction of human and machine technology, and how a company goes from merely using technology to becoming a platform. Consider an example of a company that produces exquisite financial market analysis for major brokerage firms. It may be that human analysts are far better than algorithms at making the brilliant and challenging pattern recognition observations about an upcoming swing in the markets. There is still a technology to employ here. Such a company should supply the human analysts with as many enhancing tools and methods as possible to increase the rate at which human analysts can spot patterns, reduce the cost of spreading the knowledge where it needs to go and complete the feedback loop on hits and misses. There is no limit to how deeply a company should look at enhancing its humans’ abilities. For instance, how many keystrokes does it take for the analyst to key in their findings? How many hops does a synthesized report go through before hitting the end recipient? How does the temperature of the working space impact pattern recognition ability? Perhaps all those details are far more of an impact on the sustainable profit than tuning a minute facet in some analytic algorithm.

The point here is that there should be no facet of a business left untouched by technology enhancement. Too often technology companies waste millions upon millions of dollars updating their main technology product only to see modest or no gain at all. The most successful technology companies of the last 25 years have all found efficiencies through technology mostly unseen by end users and these become their competitive advantages. Dell – ordering and build process. Microsoft – product pre-installations. Google – efficient power sources for data centers. Facebook – rapid internal code releases. Apple – very efficient supply chain. Walmart – intelligent restocking. Amazon – everything beyond the core “ecommerce”.

In a sense, these companies recognized their underlying “platform” soon after recognizing their main value proposition. They learned quickly enough to scale that proposition – and to spend a solid blend of energy on the scale and the product innovation. A quick aside – scale here is taken to mean how efficiently a business can provide its core proposition to the widest, deepest customer base. It does not refer solely to hardware or supply chain infrastructure, though often that is a critical part of it.

One of many interesting examples of such platform thinking is the Coors Brewing company back in its heyday. Most people would not consider Coors a “technology” company. In the 1950s though it changed many “industries” with the introduction of the modern aluminum can. This non-beer-related technology reduced the cost of operations, created a recycling sub-industry, reduced the problem of tin cans damaging the beer’s taste and so on. It also made it challenging for several competitors to compete on distribution, taste and production costs. This isn’t the first time the Coors company put technology to use in surprising ways. They used to build and operate their own power plants to reduce reliance on non-optimal resources and to have better control over their production.

Examples like this abound. One might conclude that any company delivering product at scale can be classified as a technology company – they all will have a significant platform orientation. However, this does not make them a platform company.

What distinguishes a platform company from simply a technology company is that the platform is provided to partners and customers to scale their businesses as well. These are the types of companies where their product itself becomes scale. These are the rare, super valuable companies: Google, Apple, Intel, Facebook, Microsoft, Salesforce.com, Amazon and so on. These companies often start by becoming highly efficient technically in the production of their core offering and then turn around and license that scale to others. The value generation gets attributed to the scale provider appropriately, in that it becomes a self-realizing cycle. The ecosystem built upon the platform of such companies demands that the platform operator continue to build their platform so they too may scale. The platform operator only scales by giving more scale innovation back to the ecosystem. Think Google producing Android and offering Google Analytics for free. Think Facebook and Open Graph and how brands rely on their Facebook pages to connect and collect data. Think Amazon and its marketplace and cloud computing services. Think Microsoft and MSDN/developer resources/cloud computing. Think Apple and iTunes, the App Store and so on.

It’s not all that easy though! There seems to come a time with all such platform companies that a critical decision must be made before it’s obvious that it’s going to work. To Open The Platform Up To Others Or Not? Will the ecosystem adopt it? How will they pay for it? Can we deal with what is created? Are we truly at scale to handle this? Are we open enough to embrace the opportunities that come out of it? Are we ready to cede control? Are we ready to create our own competitors?

That last question is the one big one. But it’s the one to embrace to be a super valuable, rare platform at the heart of a significant ecosystem. And it happens to be the way to create a path to sustainable wealth generation that isn’t a short lived parlor trick.

Read Full Post »

[image: 20120913-094556.jpg]

I even used an index card.

Read Full Post »

Some people hate buzzwords, like Big Data.   I’m ok with it.  Because unlike many buzzwords it actually kind of describes exactly what it should.   It’s a world increasingly dependent on algorithmic decision making, data trails, profiles, digital fingerprints, anomaly tracking… not everything we do is tracked, but enough is that it definitely exceeds our ability to process it and do genuinely useful things with it.

Now, is it the tools/technology that make Big Data so challenging for businesses?   I suppose somewhat.  I think it’s more behavioral than anything.  Humans are very good at intuitive pattern recognition.   We’re taking in Big Data every second – through our senses, working around our neural systems and so on.    We do this without being “aware”.   With explicit Data Collection and explicit Analysis, like we do in business, we betray our intuitions – or rather, our intuition betrays us.

How so?

We often go spelunking through big data intuiting things that aren’t real.  We’re collecting so much data that it’s pretty easy to find patterns, whether they matter or not.  We’re so convinced there’s something to find there, we often Invent A Pattern.
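Here’s a small sketch of how cheap it is to “find” a pattern (plain Python; `statistics.correlation` needs Python 3.10+, and the column and row counts are arbitrary choices of mine): generate pure noise, search every pair of columns, and the best-looking correlation will seem meaningful anyway.

```python
# 40 columns of pure noise, 50 rows each. Search all 780 pairs for the
# strongest correlation -- you'll "discover" one even though nothing is there.
import random
from itertools import combinations
from statistics import correlation  # Python 3.10+

random.seed(1)
columns = [[random.gauss(0, 1) for _ in range(50)] for _ in range(40)]

best = max(
    combinations(range(len(columns)), 2),
    key=lambda pair: abs(correlation(columns[pair[0]], columns[pair[1]])),
)
r = correlation(columns[best[0]], columns[best[1]])
print(f"columns {best} correlate at r = {r:.2f} -- and it means nothing")
```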

With the ability to collect so much data our intuition tells us if we collect more data we’ll find more patterns.  Just Keep Collecting.

And then!  We have another problem.   We’re somewhat limited by our explicit training.

We’re so accustomed to certain interfaces with explicitly collected data – spreadsheets, relational database GUIs, stats programs – that we find it hard to imagine working with data in any other way.   We’re not very good at transcoding data into more useful forms and our tools weren’t really built to make that easier.   We’re now running into “A Picture Is Worth a Thousand Words” or some version of Computational Irreducibility.   Our training has taught us to go looking for shortcuts or formulas to compress Big Data into Little Formula (you know, take a dataset of 18 variables and reduce it to a 2-axis chart with an up-and-to-the-right linear regression line).
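A tiny example of what that kind of compression throws away (again just a sketch, with a deliberately toy dataset): here y depends on x perfectly, yet the least-squares line through it reports a slope of zero – the one-number summary says “no relationship” about a perfect one.

```python
# y is a perfect function of x (a symmetric parabola), but the best-fit
# straight line has slope 0 -- the linear summary erases the structure.
xs = list(range(11))
ys = [(x - 5) ** 2 for x in xs]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
print("least-squares slope:", slope)  # 0.0
```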

Fact is, that’s just not how it works.   Sometimes Big Data needs a Big Picture because it’s a really complicated network of interactions.  Or it needs a full simulation and so on.

Another way to put this… businesses are so accustomed to the idea of Explainability.   Businesses thrive on Business Plans, Forecasts, etc., so they force an overly simplistic, reductionist analysis of the business and drive everything against that type of plan.   Driving against that type of plan ends up shaping internal tools and products to be equally reductionist.

To get the most out of Big Data we literally have to retrain ourselves against our deepest built-in approaches to data collection and analysis.   First, don’t get caught up in specific toolsets.   Re-imagine what it means to analyze data.   How can we transcode data into a different picture that illuminates real, useful patterns without reducing it to patterns we can explain?

Sometimes, the best way to do this is to give away the data to hordes and hordes of humans and see what crafty things they do with it.  Then step back and see how it all fits together.

I believe this is what Facebook has done.  Rather than analyze the graph endlessly for their own product dev efforts, they gave the graph out to others and saw what they created with it.   That has been a far more efficient, parallel processing of that data.

It’s almost like flipping the idea of data analysis and business planning on its head.   You figure out what the data “means” by seeing how people put it to use in whatever ways they like.

Read Full Post »

Older Posts »