A variety of thinkers and resources seem to converge on some fundamental ideas around existence, knowledge, perception, learning and computation. (Perhaps I have a confirmation bias and have only found what I was primed to find).
Kurt Gödel articulated and proved what I believe to be the most fundamental idea of all, the Incompleteness Theorem. This theorem, along with analogous results like the Halting Problem and other aspects of complexity theory, gives us the notion that there is a formal limit to what we can know. And by “to know” I mean it in the Leibnizian sense of perfect knowledge (scientific fact with logical proof, total knowledge). Incompleteness tells us that in any consistent formal system expressive enough to capture basic arithmetic, there will always be some statement WITHIN that system that is true but cannot be proved. This is fundamental.
It means that no matter how much mathematical or computational or systematic logic we work out in the world, there are just some statements/facts/ideas that are true but cannot be proven to be true. As the name of the theorem suggests, though its mathematical meaning isn’t quite this, our effort in formalizing knowledge will remain incomplete. There’s always something just out of reach.
It is also a strange fact that one can prove the incompleteness of a system and yet be unable to prove seemingly trivial statements within that same incomplete formal system.
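The Halting Problem side of this is concrete enough to sketch in a few lines. Here’s a minimal version of the classic diagonal argument in Python; `halts` is a hypothetical oracle (my naming, purely for illustration), and the point is precisely that no such function can exist:

```python
# Sketch of why a general halting oracle cannot exist.
# Assume, for contradiction, that `halts` correctly decides
# whether program(arg) ever terminates.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) terminates."""
    raise NotImplementedError("no total, correct version of this can exist")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on itself."""
    if halts(program, program):
        while True:      # loop forever if the oracle says we halt
            pass
    return "halted"      # halt if the oracle says we loop

# Feed paradox to itself: if halts(paradox, paradox) returns True,
# paradox(paradox) loops forever; if it returns False, it halts.
# Either answer is wrong, so `halts` cannot exist.
```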
Gödel’s proof and approach to figuring this out is based on a very clever re-encoding of the formal system laid out by Bertrand Russell and Alfred North Whitehead. This re-encoding of the symbols of math and language has been another fundamental thread we find throughout human history. One of the more modern thinkers who goes very deep into this symbolic aspect of thinking is Douglas Hofstadter, a great writer and gifted computer and cognitive scientist. It should come as no surprise that Hofstadter found inspiration in Gödel, as so many have. Hofstadter has spent a great many words on the idea of strange loops/self-reference and re-encodings of self-referential systems/ideas.
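The re-encoding itself is almost embarrassingly simple to demonstrate. Here is a toy Gödel numbering in Python: each symbol gets a number, and a whole formula gets packed into one integer via prime exponents, so statements about formulas become statements about numbers. (The symbol table is my own illustrative choice, not Gödel’s actual assignment.)

```python
# Toy Goedel numbering: encode a formula (a string of symbols) as a
# single integer by packing symbol codes into prime exponents.

SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}
INVERSE = {v: k for k, v in SYMBOLS.items()}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine at toy sizes)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode(formula):
    """Goedel number: the product of p_i ** code(symbol_i)."""
    g = 1
    for p, sym in zip(primes(), formula):
        g *= p ** SYMBOLS[sym]
    return g

def decode(g):
    """Recover the formula by factoring each prime out in turn."""
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(INVERSE[e])
    return ''.join(out)

print(encode('S0=S0'))          # one integer stands in for the formula
print(decode(encode('S0=S0')))  # 'S0=S0'
```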
But before the 20th century, Leibniz and many other philosophical, artistic, and mathematical thinkers had already started laying the groundwork around the idea that thinking (and computation) is a building up of symbols and associations between symbols. Most famously, of course, Descartes coined “I think, therefore I am,” a deliciously self-referential, symbolic expression that you could spend centuries on. (and we have!)
Art’s “progression” has shown that we do indeed tend to express ourselves symbolically. It was only in more modern times, when “abstract art” became popular, that artists began to specifically avoid overt representation via more or less realistic symbols. This obsession with abstraction turns out to be damn near impossible to pull off, as Robert Irwin demonstrated from 1960 on with his conditional art. In his more prominent works he made almost the minimal gesture to an environment (a wall, room, canvas) and found that, almost no matter what, human perception still sought and found symbols within the slightest gesture. He continues to this day to produce conditional art that seeks pure perception, without symbolic overtones, at the core of what he does. Finding that this is impossible seems, to me, to be in line with Gödel and Leibniz and so many other thinkers.
Wittgenstein is probably the most extreme example of finding that we simply can’t make sense of many things, in a philosophical or logical sense, by saying or writing ideas. Literally, “whereof one cannot speak, thereof one must be silent.” This is a very crude reading of Wittgenstein, and not necessarily a thread he carries throughout his works, but again it strikes me as being in line with the idea of incompleteness and certainly in line with Robert Irwin. Irwin, again no surprise, spent a good deal of time studying Wittgenstein and even composed many thoughts about where he agreed or disagreed with him. My personal interpretation is that Irwin has done a very good empirical job of demonstrating a lot of Wittgensteinian ideas. Whether that certifies any of it as the truth is an open question. Though I would argue that saying/writing things is also symbolic and picture-driven, so I don’t think there’s as clear a line as Wittgenstein drew. As an example, Tupper’s Formula is an insanely loopy mathematical function that draws a graph of itself.
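The formula itself is 1/2 < ⌊mod(⌊y/17⌋ · 2^(−17⌊x⌋ − mod(⌊y⌋, 17)), 2)⌋, plotted over 0 ≤ x < 106 and k ≤ y < k + 17 for one particular 543-digit constant k (which I won’t reproduce here). A minimal Python sketch of the plotting side; for any k it simply draws whatever 106×17 bitmap k encodes:

```python
# Tupper's self-referential formula: the inequality is equivalent to
# asking whether one particular bit of floor(y / 17) is set.

def tupper(x, y):
    """True where 1/2 < floor(mod(floor(y/17) * 2**(-17*x - y % 17), 2))."""
    return ((y // 17) >> (17 * x + y % 17)) % 2 == 1

def plot(k):
    """Draw the 106x17 bitmap encoded by k (top row first)."""
    for dy in range(16, -1, -1):
        # x runs right-to-left because the famous rendering is mirrored
        print(''.join('#' if tupper(x, k + dy) else ' '
                      for x in range(105, -1, -1)))

# plot(k)  # with the well-known 543-digit k, this prints the formula itself
```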
Wolfram brings us a more modern slant with the Principle of Computational Irreducibility. Basically, it’s the idea that any system with more than very simple behavior is not reducible to some theory, formula, or program that can predict it. The best we can do in trying to fully know a complex system is to watch it evolve in all its aspects. This is sort of a reformulation of the halting problem in a way that lets us more easily imagine other systems beholden to this reality. The odd facet of such a principle is that one cannot reliably prove which systems are computationally irreducible. (P vs NP and similar problems in computer science are akin to this.)
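Wolfram’s stock example is the Rule 30 cellular automaton: an update rule that fits in one line, yet whose output, as far as anyone can tell, admits no predictive shortcut. You just have to run it. A minimal sketch:

```python
# Rule 30: each new cell is left XOR (center OR right). The rule is
# trivial; the center column passes statistical randomness tests, and
# no known formula predicts it without running the automaton.

WIDTH, STEPS = 79, 40
row = [0] * WIDTH
row[WIDTH // 2] = 1                     # a single live cell in the middle

for _ in range(STEPS):
    print(''.join('#' if c else ' ' for c in row))
    row = [row[i - 1] ^ (row[i] | row[(i + 1) % WIDTH])   # wrap at edges
           for i in range(WIDTH)]
```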
Chaitin, Claude Shannon, Aaronson, Philip Glass, Max Richter, Brian Eno and many others also link into this train of thought…
Why do I think these threads of thought above (and many others I omit right now) matter at all?
Nothing less than everything. The incompleteness or irreducibility or undecidability of complex systems (and even seemingly very simple things are often far more complex than we imagine!) is the fundamental feature of existence that suggests why there is something rather than nothing. For there to be ANYTHING there must be something outside of full description. This is the struggle. If existence were reducible to a full description there would be no end to that reduction until there was literally nothing.
Weirder still, perhaps, is the Principle of Computational Equivalence and the idea of Computational Universality. Basically, any system that can compute universally can emulate any other universal computer. There are metaphysical implications here that, if I’m being incredibly brash, suggest that anything complex enough can be, and effectively is, anything else that is complex. Tying back to the previous paragraph, I suggest that if there’s anything at all, everything is everything else. This is NOT an original thought, nor is it as easily dismissed as wacky weirdo thinking. (Here’s a biological account of this thinking from someone who isn’t an old dead philosopher…)
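Emulation is easy to demonstrate, if not to fully appreciate. Here is Python, a universal language, interpreting Brainfuck, a famously minimal language that is itself Turing-complete. The sketch is mine, just to make “any universal computer can emulate any other” tangible:

```python
# One universal computer emulating another: a complete interpreter for
# Brainfuck (eight instructions, Turing-complete) in ordinary Python.

def run(program, input_bytes=b''):
    tape, ptr, pc, out, inp = [0] * 30000, 0, 0, [], list(input_bytes)
    jumps, stack = {}, []
    for i, op in enumerate(program):        # pre-match the brackets
        if op == '[':
            stack.append(i)
        elif op == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        op = program[pc]
        if op == '>':   ptr += 1
        elif op == '<': ptr -= 1
        elif op == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif op == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif op == '.': out.append(chr(tape[ptr]))
        elif op == ',': tape[ptr] = inp.pop(0) if inp else 0
        elif op == '[' and tape[ptr] == 0: pc = jumps[pc]  # skip loop
        elif op == ']' and tape[ptr] != 0: pc = jumps[pc]  # repeat loop
        pc += 1
    return ''.join(out)

# 8 * 8 + 1 = 65, i.e. ASCII 'A', computed the hard way:
print(run('++++++++[>++++++++<-]>+.'))
```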
On a more pragmatic level, I believe the consequences of irreducibility suggest why computers and animals (any complex systems) learn the way they learn. Because there is no possible way to have perfect knowledge, complex systems can only learn based on versions of being Probably Approximately Correct. (Operant Conditioning, Neural Networks, Supervised Learning, etc. are all analytic and/or empirical models of learning that suggest complex systems learn through associations rather than by executing systematic, formalized, complete knowledge.) Our use of symbolics to think is a result of irreducibility. Lacking infinite energy to chase the irreducible, symbolics (probably approximately correct representations) must be used by complex systems to learn anything at all. (This essay is NOT a proof of this; these are just some thoughts, unoriginal ones, that I’m putting out to prime myself to actually draw out empirical or theoretical evidence that this is right…)
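To give the flavor of what I mean by association-driven learning, here is a bare perceptron. It never writes down the “true” rule; it just nudges weights after each mistake until its guesses are probably approximately correct. The hidden rule and the data are made up purely for illustration:

```python
import random

# Learning by association rather than by complete knowledge: adjust
# weights on mistakes; never derive or enumerate the underlying rule.

random.seed(0)

def target(x, y):
    """The hidden rule, which the learner never sees directly."""
    return 1 if 2 * x - 3 * y + 1 > 0 else -1

data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

w = [0.0, 0.0, 0.0]                     # weights for (x, y, bias)
for _ in range(20):                     # a few passes over the sample
    for x, y in data:
        label = target(x, y)
        guess = 1 if w[0] * x + w[1] * y + w[2] > 0 else -1
        if guess != label:              # learn only from mistakes
            w[0] += label * x
            w[1] += label * y
            w[2] += label

errors = sum(1 for x, y in data
             if (1 if w[0] * x + w[1] * y + w[2] > 0 else -1) != target(x, y))
print(f"disagreements with the hidden rule: {errors} / {len(data)}")
```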
A final implication to draw out concerns languages, and specifically computer languages. To solve ever more interesting and useful problems and acquire more knowledge (of an endlessly growing reservoir of knowledge), our computer languages (languages of thought) must become richer and richer symbolically. Our computers, while we already make them emulate our richer symbolic thinking, need to have symbolics more deeply embedded in their basic operations. This is already the trend in all the large clusters powering the internet and the most popular software.
As a delightful concluding, yet openly unoriginal, thought, this book by Flusser comes to mind… Does Writing Have a Future? suggests that ever richer symbolics than the centuries-old mode of writing and reading will be not only desired but inevitable as we attempt to communicate across ever vaster networks. (Which, not surprisingly, is very self-referential if you extend the thought to an idea of “computing with pictures,” which really isn’t different from computing with words or other representations of bits that represent other representations of bits…) I suppose all of this comes down to seeing which symbolics prove to be more efficient in the total scope of computation. And whatever interpretation we assign to “efficient” is, by the very theme of this essay, at best an approximation.