
Posts Tagged ‘computation’

The Point

Everything is a pattern and connected to other patterns. The variety of struggles, wars, businesses, animal evolution, ecology, cosmological change – all are encompassed by the passive and active identification and exploitation of changes in patterns.

What is Pattern

Patterns are thought of in a variety of ways – a collection of data points, pictures, bits and bytes, tiling. All of the common-sense notions can be mapped to the abstract notion of a graph or network: nodes and their connections, edges. It is not important, for the sake of the early points of this essay, to worry too much about the concept of a graph or network or its mathematical or epistemological construction. The common-sense ideas that come to mind should suffice – everything is a pattern connected to other patterns. E.g. cells are connected to other cells, sometimes grouped into organs connected to other organs, sometimes grouped into creatures connected to other creatures.
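A minimal sketch of that common-sense idea in Python – a plain dictionary standing in for a graph, with invented node names at three of the essay's scales:

```python
# A pattern as a graph: nodes are things at any scale, edges are connections.
# Node names here are illustrative, not a real biological model.
pattern = {
    "cell_a":      ["cell_b", "organ_heart"],
    "cell_b":      ["cell_a", "organ_heart"],
    "organ_heart": ["creature_1"],          # organs grouped into creatures
    "creature_1":  ["creature_2"],          # creatures connected to creatures
    "creature_2":  [],
}

def neighbors(node):
    """Every pattern is known only by what it connects to."""
    return pattern.get(node, [])

print(neighbors("cell_a"))  # ['cell_b', 'organ_heart']
```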

Examples

As can be imagined, the universe has a practically infinite number of methods of pattern identification and exploitation. Darwinian evolution is one example of a passive pattern identification and exploitation method. The basic idea behind it is generational variance with selection by consequences. Genetics combined with behavior within environments encompasses the various strategies emergent within organisms which either hinder or improve a strategy's chance of survival. Broken down, perhaps too simplistically: an organism (or collection of organisms or raw genetic material) must be able to identify threats, energy sources and replication opportunities, and exploit these identifications better than the competition. This is a passive process overall because the source of identification and exploitation is not built into the pattern selected; it is emergent from the process of evolution. On the other hand, sub-processes within the organism (the pattern we're considering here) can be active – such as the processing of an energy source (eating, digestion, metabolism).
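A toy sketch of that loop – generational variance with selection by consequences – in Python. The bit-string genome and the fitness function are invented for illustration, not a model of real genetics:

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # an arbitrary "environment" pattern

def fitness(genome):
    # consequence: how well the organism's pattern matches its environment
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection by consequences
    population = [                                   # variance via mutation
        [bit ^ (random.random() < 0.05) for bit in random.choice(survivors)]
        for _ in range(30)
    ]
print(fitness(max(population, key=fitness)), "of", len(TARGET))  # typically 10 of 10
```

No individual in the loop "knows" the target; identification of the pattern is emergent from variation and selection, which is what makes the process passive.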

Other passive pattern processes include the effects of gravity on solar systems and celestial bodies, on down to their effects on planetary ocean tides and other phenomena. Here it is harder to spot the identification aspect. One must abandon the Newtonian concept and focus on relativity, where gravity names the changes to the geometry of spacetime. What is identified is the geometry, and different phenomena exploit different aspects of the resulting geometry. Orbits form around a sun because of the sun's dominant effect on the geometry, and the result can be exploited by planets that form with the right materials and fall into just the right orbit to be heated just right to create oceans gurgling up organisms and so on. It is all completely passive – at least within our current notion of how life may have formed on this planet. It is not hard to imagine, based on our current technology, how we might create organic life forms by exploiting identified patterns of chemistry and physics.

In similar ways the trajectory of artistic movements can be painted within this pattern theory. Painting is an active process of identifying form, light, composition and materials, and exploiting their interplay to represent, misrepresent or simply present pattern. The art market is an active process of identifying valuable concepts or artists or ideas and exploiting them before mimicry or other processes over-exploit them, until the value of novelty or prestige is nullified.

Language and linguistics are the identification and exploitation of symbols (sounds, letters, words, grammars) that carry meaning (the meaning being built up through association – pattern matching – with other patterns in the world: behavior, reinforcers, etc.). Religion, for the organizers, is the active identification and exploitation of imagery, language, story, tradition and habits that maintain devotional and evangelical patterns. Religion, for the practitioner, can be active and passive maintenance of those patterns. Business and commerce is the active (sometimes passive) identification and exploitation of efficient and inefficient patterns of resource availability, behavior and rules (asset movement, current social values, natural resources, laws, communication media, etc.).

There is no category of inquiry or phenomena that can escape this analysis. Not because the analysis is so comprehensive, but because pattern is all there is. Even the definition and articulation of this pattern theory is simply a pattern itself, which only carries meaning (and value) because of its connection to other patterns (linear literary form, English, grammar, word-processing programs, blogging, the Web, dictionaries).

Mathematics and Computation

It should be of little surprise that mathematics and computation form the basis of so much of our experience now. If pattern is everything and all patterns are in competition, it makes common sense that efficient pattern translation and processing would arise as a dominant concept, at least in some localized regions of existence.

Mathematics' effectiveness in a variety of situations/contexts (pattern processing) is likely tied to its more general, albeit often obtuse and very abstracted, ability to identify and exploit patterns across a great many categories. And yet we've found that mathematics is likely NOT THE END GAME. As if anything could be the end game. Mathematics' own generality (which we could read as reductionism and a lack of full fidelity to patterns) does it in – the proof of incompleteness showed that mathematics itself is a pattern of patterns that cannot encode all patterns. Said differently – mathematics' incompleteness necessarily means that some patterns cannot be discovered nor encoded by the process of mathematics. This is not a hard metaphysical concept. Incompleteness merely means that even for formal systems as ordinary as arithmetic there are statements whose truth or falsity cannot be established within the system. Proofs are also patterns to be identified and exploited (is this not what pure mathematics is!) and yet we know, because of proof, that we will always have statements that cannot be proved. Lacking a proof for a statement doesn't mean we can't use it; it just means we can't count on it to prove another theorem, i.e. we won't be doing mathematics with it. It is still a pattern, like any sentence or painting or concept.
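For reference, a standard modern paraphrase of the first incompleteness theorem (the textbook form, not something derived in this essay):

```latex
\text{If } T \text{ is a consistent, effectively axiomatized theory extending basic arithmetic,}\\
\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \neg G_T .
```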

Robustness

The effectiveness of mathematics is its ROBUSTNESS. Robustness (a term I borrow from William Wimsatt) is the feature of a pattern whereby, when it is processed from multiple other perspectives (patterns), the inspected pattern maintains its overall shape. Some patterns maintain their shape only within a single or limited perspective – all second-order and higher effects are like this. That is, anything that isn't fundamental is some order of magnitude less robust than things that are. Spacetime geometry seems to be highly robust as a pattern of existential organization. An effect-carrying ether, as proposed more than 100 years ago, is not. Individual artworks are not robust – they appear different from every different perspective. Color as commonly described is not robust. Wavelength is.

While much of mathematics is highly robust – or rather describes very robust patterns – it is not the most robust pattern of patterns. We do not, and likely never will, know the most robust pattern of all, but we do have a framework for identifying and exploiting patterns ever more efficiently – COMPUTATION.

Computation, by itself. 

What is computation?

It has meant many things over the last 150 years. Here defined it is simply patterns interacting with other patterns. By that definition it probably seems like a bit of a cheat to define the most robust pattern of patterns we've found as patterns interacting with other patterns. However, it cannot be otherwise. Only a completely non-reductive concept would fit the necessity of robustness. The nuance of computation is that there are more or less universal computations. The ultimate robust pattern of patterns would be a truly universal-universal computer that could compute anything, not just what is computable. The integers are computable; the real numbers, taken as a whole, are not. A "universal computer" as described by today's computer science is a program/computer that can compute all computable things. So a universal computer can compute the integers, and can approximate computable reals like pi, e and the square root of 2 to any desired precision, but almost every real number has no program that produces it at all. We can prove this and have (the halting problem, incompleteness, set theory…). So we're not at a complete loss in interpreting patterns of real numbers (irrational numbers in particular). We can and do compute with pi and e and square roots millions of times a second. In fact, this is the key point. Computation, as informed by mathematics, allows us to identify and exploit patterns far more than any other apparatus humans have devised. However, as one would expect, the universe itself computes and computes itself. It also has no problem identifying and exploiting patterns of all infinitude of types.
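A small illustration of computing with such numbers: a computable real like the square root of 2 has no finite decimal expansion, but a short program (a Python sketch here) yields any requested prefix of its digits:

```python
from math import isqrt

def sqrt2_digits(n):
    # isqrt(2 * 10**(2n)) is sqrt(2) scaled by 10**n and truncated to an integer
    s = str(isqrt(2 * 10 ** (2 * n)))
    return s[0] + "." + s[1:]

print(sqrt2_digits(30))  # 1.414213562373095048801688724209
```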

Universal Computation

So is the universe using different computation than we are? Yes and no. We haven't discovered all the techniques of computation at play. We never will – it's a deep well, and new approaches are created constantly by the universe. But we have now unlocked the strange loopiness of it all. We have uncovered Turing machines and other abstractions that let us use English-like constructs to write programs, which get translated into bits for logic gates to compute in parallel – generating solutions to math problems, creating visualizations, searching endless data, writing other programs, producing self-replicating machines, figuring out interesting 3D printer designs, simulating markets, generating virtual and mixed realities, and anything else we or the machines think up.

What lies beneath all this, though, is the very abstract yet simple concept of networks. Nodes and edges. The mathematics and algorithms of networks. Pure relation between things. Out of the simple connection of things to things arise all the other phenomena we experience. The network is limitless – it imposes no guardrails on what can or can't happen. Yet the fact that it is a network does explain why possibilities exhibit as they do, and why phenomena and experience emerge at the relative levels they do.

The computation of pure relation is the ideal. It supersedes (really, makes sense to consider over) reductionist modes of analysis, creation and pattern processing only when the alternative pattern processing is insufficiently accurate and/or has become inefficient enough that the reduction no longer provides relative value. That is, a model of the world or of a given situation is only as valuable as it avoids sacrificing too much accuracy for efficiency. It turns out that for most day-to-day situations Newtonian physics suffices.

What Next

We've arrived at a point in discovery and creation where machines and machine-human-earth combinations are venturing into virtual, mixed and alternate realities for which current typical modes of investigation (pattern recognition and exploitation) are not sufficient. The Large Hadron Collider is an example, and less an extreme example than it once was. The patterns we want to understand and exploit – the quantum, the near-light-speed, and the unimaginably large (the entire web index with self-driving cars, etc.) – are of such a different magnitude and kind. And when we've barely scratched the surface there, we get holograms and mixed reality, which will create their own web and their own physical systems as rich and confusing as anything we have now. Who can even keep track of the variety of culture and being and commerce and knowledge in something such as Minecraft? And if we can't keep track (pattern identify), how can we exploit (control, use, attach to other concepts…)?

The pace of creation and discovery will never be less in this local region of spacetime. While it may not be our goal, it is our unavoidable fate (yes, that's a scary word) to continue to compute and to take a more computational approach to existence – the identification and exploitation of patterns by other patterns seems to carry a self-reinforcing loop of recursion, and a need for ever more clarifying tools of inspection that in turn need ever more impressive means of inspecting themselves… Everything in existence replicates, passively or actively, and at a critical level of interconnectivity (complexity, patterns connected to patterns) self-inspection (reasoning, introspection, analysis, recursion) becomes necessary to advance to the next generation (to explore exploitation strategies).

Beyond robotics and 3D printing and self-replicating and evolutionary programs, the key pattern-processing concept humans will need is a biological approach to reasoning about programs/computation. Biology is a way of reasoning that attempts to classify patterns by similar behavior/configurations/features, and in those similarities to find ways to relate things (sex = replication, metabolism = energy processing, etc.). It is necessarily both reductionist, in its approach to categorizing, and anti-reductionist, in its approach of looking at everything anew. Programs/computers escape our human (and theoretical) ability to understand them, and yet we need some way to make progress if we, ourselves, are to persist alongside them.

And So.

It’s quite possible this entire train of synthesis is a justification for my own approach to life and my existence. And this would be consistent with my above claims.   I can’t do anything about the fact that my view is entirely biased by my own existence as a pattern made of patterns of patterns all in the lineage of humans emerged from hominids and so on all the way down to whatever ignited patterns of life on earth.

I could be completely wrong. Perhaps some other way of synthesizing existence all the way up and down is right. Perhaps there's no universal way of looking at it. Though it seems highly unlikely, and very strange, to me that patterns at one level or in one perspective couldn't be analyzed abstractly and applied across, up and down the levels. And that the very idea suggests patterns of pattern synthesis are fundamental strikes me as much more sensible, useful and worth pursuing than anything else we've uncovered and cataloged to date.

Read Full Post »

A variety of thinkers and resources seem to converge on some fundamental ideas around existence, knowledge, perception, learning and computation. (Perhaps I have a confirmation bias and have only found what I was primed to find.)

Kurt Gödel articulated and proved what I believe to be the most fundamental idea of all, the Incompleteness Theorem. This theorem, along with analogous results in the Halting Problem and other aspects of complexity theory, provides us the notion that there is a formal limit to what we can know. And by "to know" I mean it in the Leibnizian sense of perfect knowledge (scientific fact with logical proof, total knowledge). Incompleteness tells us that even within highly abstract, specialized formal systems there will always be some statement that is true but cannot be proved. This is fundamental.

It means that no matter how much mathematical or computational or systematic logic we work out in the world, there are just some statements/facts/ideas that are true but cannot be proven to be true. As the name of the theorem suggests, though its mathematical meaning isn't quite this, our effort in formalizing knowledge will remain incomplete. There's always something just out of reach.

It is also a strange fact that one can prove the incompleteness of a system and yet not prove seemingly trivial statements within that incomplete formal system.

Gödel's proof and approach to figuring this out rests on a very clever re-encoding of the formal systems laid out by Bertrand Russell and Alfred North Whitehead. This re-encoding of the symbols of math and language has been another fundamental thread we find throughout human history. One of the more modern thinkers who goes very deep into this symbolic aspect of thinking is Douglas Hofstadter, a great writer and gifted computer and cognitive scientist. It should come as no surprise that Hofstadter found inspiration in Gödel, as so many have. Hofstadter has spent a great many words on the idea of strange loops/self-reference and re-encodings of self-referential systems/ideas.

But before the 20th century, Leibniz and many other philosophical, artistic and mathematical thinkers had already started laying the groundwork for the idea that thinking (and computation) is a building up of symbols and associations between symbols. Most famous, of course, is Descartes's "I think, therefore I am" – a deliciously self-referential, symbolic expression that you could spend centuries on. (And we have!)

Art's "progression" has shown that we do indeed tend to express ourselves symbolically. It was only in more modern times, when "abstract art" became popular, that artists began to specifically avoid overt representation via more or less realistic symbols. This obsession with abstraction turns out to be damn near impossible to pull off, as Robert Irwin demonstrated from 1960 on with his conditional art. In his more prominent works he made almost the minimal gesture to an environment (a wall, a room, a canvas) and found that, almost no matter what, human perception still sought and found symbols within the slightest gesture. He continues to this day to produce conditional art that seeks pure perception without symbolic overtones. Finding that it's impossible seems, to me, to be in line with Gödel and Leibniz and so many other thinkers.

Wittgenstein is probably the most extreme example of finding that we simply can't make sense of many things, really, in a philosophical or logical sense by saying or writing ideas. Literally, "one must be silent." This is a very crude reading and interpretation of Wittgenstein, and not necessarily a thread he carries throughout his works, but again it strikes me as being in line with the idea of incompleteness and certainly in line with Robert Irwin. Irwin, again no surprise, spent a good deal of time studying Wittgenstein and even composed many thoughts about where he agreed or disagreed. My personal interpretation is that Irwin has done a very good empirical job of demonstrating a lot of Wittgensteinian ideas. Whether that certifies any of it as the truth is an open question. Though I would argue that saying/writing things is also symbolic and picture-driven, so I don't think there's as clear a line as Wittgenstein drew. As an example, Tupper's self-referential formula is an insanely loopy mathematical function that draws a graph of itself.
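For the curious, a hedged sketch of how that self-drawing works. Tupper's published 543-digit constant k (omitted here) selects the formula's own image out of the plane; the bit test below is the standard integer form of the inequality 1/2 < floor(mod(floor(y/17) · 2^(−17·floor(x) − mod(floor(y), 17)), 2)):

```python
def tupper_bitmap(k, width=106, height=17):
    """Render the 106x17 region k <= y < k+17 of Tupper's formula as text."""
    rows = []
    for y in range(k + height - 1, k - 1, -1):
        row = ""
        for x in range(width - 1, -1, -1):   # the plot reads right to left
            row += "#" if (y // 17) >> (17 * x + y % 17) & 1 else " "
        rows.append(row)
    return "\n".join(rows)

# print(tupper_bitmap(k)) with Tupper's constant k reproduces the formula itself.
```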

Wolfram brings us a more modern slant in the Principle of Computational Irreducibility. Basically, it's the idea that any system with more than very simple behavior is not reducible to some theory, formula or program that can predict it; the best we can do to fully know a complex system is to watch it evolve in all its aspects. This is sort of a reformulation of the halting problem in a way that lets us more easily imagine other systems beholden to this reality. The odd facet of such a principle is that one cannot reliably prove which systems are computationally irreducible. (The P vs. NP problem and similar questions in computer science are akin to this.)
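A quick way to feel the principle is to run an elementary cellular automaton such as rule 30: no known formula shortcuts its evolution, so the sketch below (Python, sized small for display) simply watches it unfold:

```python
RULE = 30  # Wolfram's rule number: bit i gives the new cell for neighborhood i

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31   # start from a single black cell
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```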

Gregory Chaitin, Claude Shannon, Scott Aaronson, Philip Glass, Max Richter, Brian Eno and many others also link into this train of thought…

Why do I think these threads of thought (and many others I omit right now) matter at all?

Nothing less than everything. The incompleteness or irreducibility or undecidability of complex systems (and even seemingly very simple things are often far more complex than we imagine!) is the fundamental feature of existence that suggests why there is something rather than nothing. For there to be ANYTHING there must be something outside of full description. This is the struggle. If existence were reducible to a full description there would be no end to that reduction until there literally was nothing.

Weirder still, perhaps, are the Principle of Computational Equivalence and computational universality. Basically, any system that can compute universally can emulate any other universal computer. There are metaphysical implications here that, if I'm being incredibly brash, suggest that anything complex enough can be, and effectively is, anything else that is complex. Tied to the previous paragraph, I suggest that if there's anything at all, everything is everything else. This is NOT an original thought, nor is it as easily dismissed as wacky weirdo thinking. (Here's a biological account of this thinking from someone who isn't an old dead philosopher…)

On a more pragmatic level, I believe the consequences of irreducibility suggest why computers and animals (any complex systems) learn the way they learn. Because there is no possible way to have perfect knowledge, complex systems can only learn via versions of Probably Approximately Correct learning. (Operant conditioning, neural networks, supervised learning, etc. are all analytic and/or empirical models of learning that suggest complex systems learn through associations rather than by executing systematic, formalized, complete knowledge.) Our use of symbolics to think is a result of irreducibility. Lacking infinite energy to chase the irreducible, symbolics (probably approximately correct representations) must be used by complex systems to learn anything at all. (This essay is NOT a proof of this; these are just some thoughts, unoriginal ones, that I'm putting out to prime myself to actually draw out empirical or theoretical evidence that this is right…)
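A small sketch of learning-by-association in that spirit: a perceptron that never derives the target rule formally, it just converges toward it from examples. The linear target rule is made up for illustration:

```python
import random

random.seed(0)
target = lambda x, y: 1 if 2 * x - 3 * y + 1 > 0 else 0   # the "world's" rule

w = [0.0, 0.0, 0.0]          # two weights and a bias, learned rather than given
for _ in range(2000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    guess = 1 if w[0] * x + w[1] * y + w[2] > 0 else 0
    err = target(x, y) - guess                 # a reinforcement-style signal
    w = [w[0] + err * x, w[1] + err * y, w[2] + err]

tests = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
acc = sum((1 if w[0] * x + w[1] * y + w[2] > 0 else 0) == target(x, y)
          for x, y in tests)
print(acc / 1000)  # high, but not certain: probably approximately correct
```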

A final implication to draw out concerns languages, and specifically computer languages. To solve ever more interesting and useful problems and acquire more knowledge (from an endlessly growing reservoir of knowledge), our computer languages (languages of thought) must become more and more symbolically rich. Our computers, while we already make them emulate our richer symbolic thinking, need to have symbolics more deeply embedded in their basic operations. This is already the trend in the large clusters powering the internet and the most popular software.

A delightful, if open and unoriginal, concluding thought comes to mind from Vilém Flusser's book Does Writing Have a Future?: ever richer symbolics than the centuries-old mode of writing and reading will be not only desired but inevitable as we attempt to communicate across ever vaster networks. (Which, unsurprisingly, is very self-referential if you extend the thought to the idea of "computing with pictures," which really isn't different from computing with words or other representations of bits that represent other representations of bits…) I suppose all of this comes down to seeing which symbolics prove more efficient in the total scope of computation. And whatever interpretation we assign to "efficient" is, by the very theme of this essay, at best an approximation.

Read Full Post »

Time equals money is a truism. It is true in the sense that both concepts are simply agreements between people in relation to something else. In the case of time it is an agreement between people about clocks or other cyclic mechanisms and usually in relation to synchronizing activities. In the case of money it is the agreement of credit and debts as representations of trustworthiness. The day, minute, hour, and second are merely efficient conventions we use to compress information about synchronizing our activities in relation to other things. Dollars, cents, bitcoins and notes are all conventions we use to liquidate and exchange trust.

In fact, there’s nothing in human existence that isn’t in this same class of concepts like time and money other than food, water, shelter, sleep and reproduction. All of our cultural conventions and social constructs are, at the root, based upon the need to survive. All of our social existence is derived from a value association network built up to help us obtain the basic necessities of our personal survival. This value association network can become quite complicated and certainly extends beyond our own individual lifespan and influence. We have traditions, works of literature and art, history, religion, politics and so on all due to an extremely complex evolution of learned associative and genetic strategies for survival of our individual genes.

The goal in this essay is not to reduce everything we experience to survival of genes and suggest anthropology, social sciences, psychology and so forth aren't worth investigation. The various complex systems investigated in all these disciplines exist and emerge as standalone things to study and figure out. Because politics and economies and social networks actually do exist, we must study them and understand their effects and causes. Also, we cannot effectively research all the way down from these emergent concepts to the fundamentals, for a variety of reasons, not least of which is simply computational irreducibility.

Computational irreducibility – the principle (unproven) – suggests the best we are going to be able to do to understand EVERYTHING is just to keep computing and observing. Everything is unfolding in front of us, and it's "ahead" of us in ways that aren't compressible. This suggests, to me, that our best source of figuring things out is to CREATE. Let things evolve: because we created them, we understand exactly what went into them, and after we're dead we will have machines we made that can also understand what went into them.

In a sense there’s only so much behavior (evolution of information) we can observe with the current resources available to us. We need to set forth new creations that evolve in our lifetimes (genetic and computational lifetimes). Let us see if cultures and social structures and politics and money evolve from our creations!

However, until that's more feasible than it is now, we have history and anthropology and sociology… and yet! While new patterns emerge at various levels of reduction, these emergent patterns often share common abstract structures and behavior. For example, the Fibonacci sequence shows up in a variety of levels of abstract patterns. Another example is fractal behavior, found in economic markets, in the growth of trees and obviously within various computational systems. It is this remarkable phenomenon that leads to my forthcoming hypothesis.

The fundamental aspect of existence is information.

Bits. Bits interacting with bits to form, deform and reform patterns. These patterns able to interpret, reinterpret and replicate. These patterns can be interpreted as networks. Networks, described by bits, made of bits, able to understand streams of bits.

[For understandable examples of this in everyday life, think of the computer you're reading this on. It is made of atoms (bits) and materials like silicon (bits of bits) fashioned into chips and memory banks (a network of bits of bits that process and store bits) that understand programs (bits about other bits) and interact with humans (who type bits from their own networked being [fingers, brains, eyes…]).]

Information has no end and no beginning. It doesn’t need a physical substrate, as in a particular substrate. It becomes the substrate. It substantiates all substrates. Anything and everything that exists follows the structures of pure information and pure computation – our physical world is simply a subset of this pure abstraction.

These high-level – or what we call high-level – phenomena like social networks and politics and economies are all phenomena of information and information processing. The theories of Claude Shannon, Kurt Gödel, Church, Turing, Chaitin, Mandelbrot, Wolfram and so on all show signs that at all levels of "how things work" there is a fundamental information-theoretic basis.

A strange thing is happening nowadays. Well, strange to many who grew up working the land and manipulating the world directly with their own hands… The majority of "advanced" societies are going digital. By digital I mean, very clearly, not the stuff taken directly from the ground of the earth (I don't mean digital in the sense of digital vs. analog, continuous vs. discrete). The economies are 90%+ digital, the majority of the most valuable companies don't produce physical products, the politics are digital, the dominant mode of communication and social interaction is digital, and so forth. It's almost impossible at this point to think our existence will end up as anything but informatic. But it's a bit misleading to think we're moving away from one mode into another. The fact is it's ALL INFORMATION, and we're just arguing about the representation of that information in physical form.

So what does any of the last set of paragraphs have to do with the opening? Well, everything. Time and money are simply exchanges of information. We will find traces of their basic ideas (synchronization and "trust") in all sorts of complex information exchanges. Time and money are compressions of information that allow us finite, yet universal, computers to do things mostly within our computational lifetime. Time and money are NOT fundamental objects in existence. They are emergent abstractions that will emerge EVERY TIME sufficiently complex information structures start interacting and, assuredly, develop associative value networks.

Are they real? Sure.

Should we obsess over them? It all depends on what you, as an information packet, learn to value. If your basic means of survival as the information packet you are depends on the various associations they provide, then yes. If not, then no. Or perhaps you'll deal with them very differently than you do today.

Read Full Post »

There is truth.   Truth exists.  There is a truth to this existence, this universe.   We might lack the language or the pictorial tools or the right theory and models, but there is truth.

What is this truth? What is truth?

Things exist, we exist, there is a speed of light, the square root of two is irrational, the halting problem is undecidable, there are abstract and real relations between abstract and real things.

The truth is something that, yes, has a correspondence to the facts. That is not the end of it though (despite the pragmatic claims of some!). The truth has a correspondence to the facts because it is true! The facts HAVE to line up against a truth. The truth exists outside of specific events or objects. A number has an existence, if even only as an idea, and it has relations to other things. And the description of that number and those relations ARE truth. A computer program has its truth whether you run the program or not. If you were to run it, it would halt or not halt; that potential is in the computer program from the beginning, it doesn't arise from its execution.

On Proof and not Proof but Use

We can prove these truths and many more. We can prove through English words or through mathematical symbolism or computer programs. Our proofs, put into these formats, can be and often are wrong, and are revised over and over until there are no holes. No matter how fragile a proof, or the act of providing one, the truth is still not diminished. It is still there, whether we know it or not and whether we can account for it or not. And the truth begs proof. It begs to be known in its fullness and to be trusted as truth to build up to other truths.

BUT!

Proof isn't always possible – in fact we've learned, from issues in computability and incompleteness, that complete provability of all truth is impossible. This beautiful truth itself further ensures that the truth will always beckon us and will never be extinguished through an endless assault. There is always more to learn.

The unprovable truths we can still know and use. We can use them without knowing they are true. We do this all the time, all day long. How many of us know the truth of how physics works? Or how our computers do what they do? And does that prevent their use – the implementation of that truth towards more truth?

Why?

Why defend truth?  Why publish an essay exalting truth and championing the search for truth? Does the truth need such a defense?

Being creatures with intelligence – that is, senses and a nervous system capable of advanced pattern recognition – our ultimate survival depends on figuring out what's true and what isn't. If too many vessels (people!) for the gene code chase falsehoods, the gene code isn't likely to survive too many generations. Life, and existence itself, depends on the conflict between entropy and shape, chaos and order, stillness and motion, signal and noise. The truth is the abstract idea that arises from this conflict, and life is the real, tangible thing born from that truth. We learn truths – which processings of this thing into that thing keep us alive – and we live to learn these things. In a completely entropic existence there is nothing. Without motion there is nothing. In total chaos there is nothing. It is in the slightest change towards shape, order and signal that we find the seeds of truth and the whole truth itself. The shaping of entropy is the truth. Life is the embodiment of truth forming.

So I can't avoid defending the truth. I'm defending life. My life. In defending it, I'm living it. And you, in whatever ways you live, are defending the truth and your relation to other things. If I'm alive I must seek and promote truth. While death isn't false, chasing falsehood leads to death, or rather to nonexistence. Could there ever be truth to a statement like "I live falsely" or "I sought the false"? There's nothing to seek. Falsehood is easy; it's everywhere. It's everything that isn't the truth. To seek it is to exert no effort (to never grow) and to never gain – falsity has no value. Living means growing, growing requires effort, and only the truth, the learning of the truth, demands effort.

How do we best express and ask about truth?

There's a great deal of literature on the unreasonable effectiveness of mathematics in describing the world. There's also a great deal of literature, growing by the day, suggesting that mathematics isn't the language of the way the universe works. Both views I find rather limited. Mathematics and doing math are about a certain rigor in describing things and their relations. It's about forming and reforming ways to observe and question ideas, objects, motion, features… It's about drawing a complete picture and all the reasons it should and shouldn't be so. Being this way, this wonderful thing we call mathematics, there is no way mathematics couldn't be effective at truth expression. OK, for those who want to nitpick: I put "computation" in with mathematics. Describing (writing) computer programs and talking about their features and functions and observing their behavior is doing math; it is mathematics.

Art has very similar qualities.   Art doesn’t reduce beyond what should be reduced.   It is the thing itself.  It asks questions by shifting perspectives and patterns.  It produces struggle.  Math and art are extremely hard to separate when done to their fullest.  Both completely ask the question and refuse to leave it at that.   Both have aspects of immediate impression but also have a very subtle slow reveal.  Both require both the artist and the audience, the mathematician and the student – there is a tangible, necessary part of the truth that comes directly from the interaction between the parties, not simply the artifacts or results themselves.

Other ways of expressing and thinking are valuable and interesting. That is, biology and sociology and political science and so on… these are all extremely practical implementations or executions of sub-aspects of the truth and truth expression. They are NOT the most fundamental nor the most fruitful overall. Practiced poorly, they lead to falsehoods or at best mild distractions from the truth. Practiced well, they very much improve the mathematics and art we do.

What does any of this get us?  What value is there in this essay?

This I cannot claim anything more about than what I have above. For example, I don't know how to specifically tell someone what value the truth that the square root of 2 is irrational has for them. It certainly led to a fruitful exploration and exposition of a great deal of logic and mathematical thinking that led to computation, and on and on. But that doesn't come close to explaining its value, or why talking about its value today, in this essay, matters.

My only claim would be that truth matters and if there is any truth in this essay then this essay matters.  How that matter comes to fruition I don’t know.   That it comes to any more fruition than my pounding out this essay after synthesizing many a conversation and many books on the subject and writing some computer programs and doing math is probably just a very nice consequence.

The truth’s purpose is itself, that it is true.

Read Full Post »

There’s a great deal of confusion about what is meant by the concept “computational knowledge.”

Stephen Wolfram put out a nice blog post on the quest for computable knowledge. In the beginning he loosely defines the concept:

So what do I mean by “computable knowledge”? There’s pure knowledge—in a sense just facts we know. And then there’s computable knowledge. Things we can work out—compute—somehow. Somehow we have to organize—systematize—knowledge to the point that we can build on it—compute from it. And we have to know methods and models for the world that let us do that computation.

Knowledge

Trying to define it any more rigorously than that is somewhat dubious. Let's dissect the concept a bit to see why. Here we'll discuss knowledge without getting too philosophical. Knowledge is concepts we have found to be true and whose context, use and function we somewhat understand – facts, "laws" of nature, physical constants. Just recording those facts without understanding context, use and function would be pretty worthless – a bit like listening to a language you've never heard before. It's essentially just data.

In that frame of reference, not everything is "knowledge," much less computational knowledge. How to define what is and isn't knowledge… well, it's contextual in many cases and gets into a far bigger discussion of epistemology and all that jive. A good discussion to have, for sure, but one that would muddy this one.

Computation

What I suspect is more challenging for folks is the idea of "computational" knowledge. That's knowledge we can work out – generate, in a sense – from other things we already know or assume (pure knowledge: axioms, physical constants…). Computation is a very broad concept that refers to far more than "computer" programs. Plants, people, planets, the universe compute – all these things take information in one form (input: energy, matter) and convert it to other forms (output). And yes, calculators and computers compute… and those objects are made from things (silicon, copper, plastic…) that you don't normally think of as "computational" but that, when configured appropriately, make a "computer." Now, to get things to compute particular things they need instructions – we need to systematize, or program, them. Sometimes these programs are open-ended (or appear to be!). Sometimes they are very specific and closed. Again, don't think of a program as only something written in Java. DNA is an instruction set; so are various other chemical structures, and arithmetic, and employee handbooks… basically anything that can tell something else how to use or do something with input. Some programs, like DNA, can generate themselves. These are very useful programs. The point is: you transform input into some output. That's computation, put in a very basic, non-technical way. It becomes knowledge when the output has an understandable context, use and function.
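A minimal sketch of "configured appropriately, they make a computer": wiring one primitive (NAND, here just a Python function) into richer logic. Everything below the first definition is built only from nand, the way hardware builds computation out of one well-configured physical element:

```python
def nand(a, b):
    return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return or_(and_(a, not_(b)), and_(not_(a), b))

for a in (0, 1):
    for b in (0, 1):
        # xor is the sum bit and and_ the carry bit of a half adder
        print(a, b, "->", xor(a, b), and_(a, b))
```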

Categorizing what is computational knowledge and what is not can be a tricky task.  Yet for a big chunk of knowledge it’s very clear.

Implications and Uses

The follow-on question, once this is grokked: what's computational knowledge good for?

The value of the end result, the computed knowledge, is determined by its use. However, the method of computing knowledge is valuable because in many cases it is much more efficient (faster and cheaper) than waiting around for the "discovery" of the knowledge by other methods. For example, you can run through millions of structure designs using formal computational methods very quickly, versus trying to architect/design/test those structures by more traditional means. The same could be said for computing rewarding financial portfolios, AdWords campaigns, optimal restaurant locations, logo designs and so on. Also, computational generation of knowledge sometimes surfaces knowledge that might otherwise never have been found with other methods (many drugs are now designed computationally, for example).
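A hedged toy of that first example – exhaustively searching candidate designs far faster than building them. The score function below is a made-up stand-in for a real structural model, not an actual engineering method:

```python
import itertools

def score(width, height):
    stiffness = width * height ** 3   # proportional to a beam's second moment of area
    material = width * height         # cross-sectional area as a cost proxy
    return stiffness / material

# Brute-force "design": visit 10,000 candidate beams in milliseconds.
best = max(itertools.product(range(1, 101), repeat=2), key=lambda wh: score(*wh))
print(best)
```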

Web Search

These concepts and methods have implications in a variety of disciplines. The first major one is "web search." The continuing challenge of web search is making sense of the corpus of web pages, data snippets and streams of info put out every day. A typical search engine must hunt through this VERY BIG corpus to answer a query. This is an extremely efficient method for many search tasks – especially when the fidelity of the answer is not such a big deal. It's a less efficient method when the search is a very small needle in a big haystack and/or when precision and accuracy are imperative to the overall task. Side note: web search may not have been designed with that in mind; however, users increasingly expect a web search to really answer a query, often forgetting that it is the landing page – the page that was indexed – that does the answering. Computational knowledge can very quickly compute answers to very detailed queries. A web search completely breaks down when the query is about something never before published to the web. There are more of these queries than you might think! In fact, an infinite number of them!
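A tiny illustration of an answer that is computed rather than retrieved – the result below has almost certainly never been published as a page to be indexed (the date is arbitrary):

```python
import datetime

# "What day of the week is 14 May 2387?" - computable, not searchable
names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
print(names[datetime.date(2387, 5, 14).weekday()])
```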

Experimentation

Another important implication is that computational knowledge is a method for experimentation and research. Because it is a generative activity, one can unearth new patterns, new laws, new relationships, new questions, new views… This is a very big deal. (Not that this wasn't possible before now… of course, computation and knowledge are not new! The universe has been doing this for ~14 billion years. Now we have coherent and tangible systems that make it easier and more useful to apply formal computation to more and more tasks.)

P.S.

There are a great many challenges, unsolved issues and potentially negative aspects of computational knowledge. Formal computation systems are by no means the most efficient, most elegant, or most fun ways to do some things. My FAVORITE example, and what I want to propose one day as the evolution of the Turing Test, is HUMOR. Computers and formal computation suck at humor. And I do believe that humor can be generated formally. It's just really, really, really hard to figure out. So for now, it's still easier and more efficient to get a laugh by hitting a wiffle ball at your dad and putting it on YouTube.

Read Full Post »

Here is one of the best blog posts on putting Wolfram|Alpha into perspective:

Asking which result is "right" misses the point. Google is a search engine; it did exactly what it's supposed to do. It isn't making any assumptions about what you're looking for, and will give you everything the cat dragged in. If you're an elementary school teacher or a flat-earther, you can find the result you want somewhere in the big, messy pile. If you want accurate data from a known and reliable source, and you want to use that data in other computations, you don't want Google's answer; you want Alpha's. (BTW, the Earth's circumference is .1024 of the distance to the Moon.)

When is this important? Imagine we were asking a more politically charged question, like the correlation between childhood vaccinations and autism, or the number of civilians killed in the six-day war. Google will (and should) give you a wide range of answers, from every part of the spectrum. It's up to you to figure out where the data actually came from. Alpha doesn't yet have data about autism or six-day war casualties, and even when it does, no one should blindly assume that all data that's "curated" is valid; but Wolfram does its homework, and when data like this is available, it will provide the source. Without knowing the source, you can't even ask the question.

Read Full Post »

The NKS summer school archive site is live. I figured it would be best for me to wait until that was done before I attempted to post my project or write too much about others' projects.

You can check out a summary of what I did personally on the Wolfram site.

Project Title 
Perturbing Turing Machines

Project
Perturbations to elementary cellular automata have been investigated thoroughly. Under a certain level of perturbation, there are slight changes to local patterns but the automata tend to recover globally. The more complicated rules show greater disturbance but still can tolerate perturbations. This study considers similar perturbations to Turing machines.

Do Turing machines exhibit similar behavior?

“And the reason this is important is that in any real experiment, there are inevitably perturbations on the system one is looking at” (NKS p. 324). We must account for the effects of perturbations to draw any connections between these simple constructs and their natural counterparts.
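A minimal sketch of the kind of experiment described above – not the actual project code; the 2-state, 2-color rule and the perturbation scheme are invented for illustration:

```python
import random

RULE = {  # (state, symbol) -> (new state, symbol written, head move)
    (0, 0): (1, 1, +1),
    (0, 1): (1, 0, -1),
    (1, 0): (0, 1, -1),
    (1, 1): (0, 0, +1),
}

def run(steps, p_flip=0.0, seed=0):
    rng = random.Random(seed)
    tape, state, head = {}, 0, 0
    history = []
    for _ in range(steps):
        if p_flip and rng.random() < p_flip:        # the perturbation
            cell = rng.randint(head - 5, head + 5)
            tape[cell] = 1 - tape.get(cell, 0)
        state, tape[head], move = RULE[(state, tape.get(head, 0))]
        head += move
        history.append((state, head))
    return history

clean, noisy = run(1000), run(1000, p_flip=0.01)
agree = sum(c == n for c, n in zip(clean, noisy)) / 1000
print(f"steps where the perturbed run matches the clean run: {agree:.2f}")
```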

Every project done there was interesting.  I encourage readers to check them all out.  Some projects are more abstract than others and all were a good launch pad for further research or immediate practical use.

I do have to call out Ben Rapoport’s project on neuronal computations.  It was beautiful in many ways.

Read Full Post »