
Archive for the ‘software’ Category

We have a problem.

As it stands now, the present and near future of economic, social and cultural development derives primarily from computers and programming. The algorithms already dominate our society – they run our politics, they run our financial system, they run our education, they run our entertainment, they run our healthcare. The ubiquitous tracking of everything that can possibly be tracked determined this current situation. We must have programs to make things, to sell things, to exchange things.


The problem is not necessarily the algorithms or the computers themselves but the fact that so few people can program.    And why?   Programming Sucks.

Oh sure, for those that do program and enjoy it, it doesn’t suck. As Much.   But for the 99%+ of the world’s population that doesn’t program a computer to earn a living it’s a terrible endeavour.

Programming involves a complete abstraction away from the world and all surroundings. Programming is disembodied – it is mostly a thought exercise mixed with some of the worst aspects of engineering. Mathematics, especially the higher-order really crazy stuff, long ago became unapproachable and completely disembodied, requiring no physical engineering or representation at all. Programming, in most of its modern instances, puts consequences very far away from the creative act. That is, in most modern systems it takes days, weeks, months or years to personally and deeply feel the results of what you’ve built. Programming is ruthless. It’s unpredictable. It’s 95% or more reinventing the wheel and configuring environments just to run the most basic program. It’s all setup, not a lot of creation. So few others understand it that they can’t appreciate the craft during the act (only the output is appreciated, counted in users and downloads).

There are a couple of reasons why this is the case – a few theoretical/natural limits and a few self-imposed, engineering and cultural issues.

First, the engineering and cultural issues. Programming languages and computers evolved rather clumsily, built mostly by programmers for other programmers – not for the majority of humans. There’s never been a requirement to make programming itself more humanistic, more embodied. Looking back on the history of computers, computing was always done in support of something else, not for its own sake. It was done to Solve Problems. As long as the computing device and program solved the problem, the objective was met. Even the early computer companies famously thought it was silly to imagine everyone might one day actually use a personal computer. And now we’re at a potentially more devastating standstill – it’s absurd to most people to think everyone might actually need to program. I’ll return to these issues.

Second, the natural limits of computation make for a very severe situation. There are simply things that are non-computable. That is, we can’t solve them. Sometimes we can PROVE we can’t solve them, but that doesn’t get us any closer to solving them. The canonical example is the Halting Problem. The idea is basically that no general procedure can predict, for every sufficiently complex program, whether that program will halt or not. The implication is that you must simply run the program and see if it halts. Again, complexity is the key here. If a program is relatively small and fast, with a finite number of possible inputs and outcomes, then you can simply run it across all possible inputs and check the outputs. Problem is… very few programs are that simple, and certainly not any of the ones that recommend products to you, trade your money on Wall Street, or help doctors figure out what’s going on in your body.
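To make the “run it and see” point concrete, here’s a minimal Python sketch (the generator model and all the names are my own framing, purely illustrative): the best any observer can do is run a program for some budget of steps and report “halted” or “don’t know” – it can never report “loops forever”.

```python
def observe_halting(program, max_steps):
    """Run a program (modeled as a generator function) for at most
    max_steps steps. Report "halted" if it finishes within the budget,
    "unknown" otherwise -- watching can never prove non-termination."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)  # advance the program by one step
        except StopIteration:
            return "halted"
    return "unknown"

def counts_to_ten():
    n = 0
    while n < 10:
        n += 1
        yield  # one "step" of work

def loops_forever():
    while True:
        yield

def collatz_from(n):
    # Whether this halts for EVERY starting n is a famous open question.
    def prog():
        m = n
        while m != 1:
            m = 3 * m + 1 if m % 2 else m // 2
            yield
    return prog

print(observe_halting(counts_to_ten, 100))        # halted
print(observe_halting(loops_forever, 10_000))     # unknown -- not "never"!
print(observe_halting(collatz_from(27), 1_000))   # halted (after 111 steps)
```

Note the asymmetry: a “halted” answer is definitive, but “unknown” tells you nothing – maybe it needed one more step, maybe it runs forever. That asymmetry is the whole problem.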

STOP.

This is a VERY BIG DEAL.    Think about it.   We deploy millions of programs a day with completely non-deterministic, unpredictable outcomes.  Sure we do lots of quality assurance and we test everything we can and we simulate and we have lots of mathematics and experience that helps us grow confident… but when you get down to it, we simply don’t know if any given complex program has some horrible bug in it.

This issue rears its head an infinite number of times a day. If you’ve ever been mad at MS Word for screwing up your bullet points, or your browser stops rendering a page, or your internet doesn’t work, or your computer freezes… this is what’s going on. All of these things are complex programs interacting with other programs, and all of them have millions (give or take millions) of bugs in them. Add to that the fact that all of these things are mutable bits on your computer that viruses or hardware issues can manipulate (you can’t be sure the program you bought is the program you currently run) and you can see how things quickly escape our ability to control them.
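On the “you can’t be sure the program you bought is the program you currently run” point: the standard partial remedy is to fingerprint a program’s bytes with a cryptographic hash and compare later. A hedged sketch (the byte strings here are stand-ins, not a real binary):

```python
import hashlib

def fingerprint(program_bytes: bytes) -> str:
    """Return the SHA-256 digest of a program's bytes as hex."""
    return hashlib.sha256(program_bytes).hexdigest()

# Stand-in for the bytes of a program as originally shipped.
shipped = b"\x7fELF imagine a compiled program here"

# Flip a single bit in the first byte, as a virus or a failing disk might.
corrupted = bytes([shipped[0] ^ 0x01]) + shipped[1:]

print(fingerprint(shipped) == fingerprint(shipped))    # True: same bytes
print(fingerprint(shipped) == fingerprint(corrupted))  # False: one bit off
```

Of course this only moves the problem: you now have to trust the channel that delivered the digest, and the program doing the checking.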

This is devastating for the exercise of programming. Computer scientists have invented a myriad of ways to temper the reality of the halting problem. Most of these management techniques make programming even more mysterious and challenging due to the imposition of even more rules that must be learned and maintained. Unlike music and writing and art and furniture making and fashion, we EXPECT and NEED computers to do exactly what we program them to do. Most of the other stuff humans do and create is just fine if it sort of works. It still has value. Programs that are too erratic or, worse, catastrophic are not only not valuable; we want to eliminate them from the earth. We probably destroy some 95%+ of the programs we write.

The craft of programming is at odds with its natural limits. Our expectations, and thus the tools we craft to program, conflict with the actuality. Our use of programs exceeds their possibilities.

And this really isn’t due to computers or programming, but something more fundamental: complexity and prediction. Even as our science shows us more and more that prediction is an illusion, our demands of technology and business and media run counter to it. This fundamental clash manifests itself in programming, programming languages, the hardware of computers, the culture of programming. It is at odds with itself, and in being so conflicted it is unapproachable to those that don’t have the ability to stare maddeningly into a screen flickering with millions of unknown rules and bugs. Mastery is barely achievable except for a rare few. And without mastery enjoyment rarely comes – the sort of enjoyment that can sustain someone’s attention long enough to do something significant.

I’ve thought long and hard about how to improve the craft of programming. I’ve programmed a lot, led a lot of programming efforts, delivered a lot of software, scrapped a lot more. I’ve worked in 10+ languages. I’ve studied mathematics and logic and computer science and philosophy. I’ve worked with the greatest computer scientists. I’ve worked with amazing business people and artists and mathematicians. I’ve built systems large and small in many different categories. In short, I’ve yet to find a situation in which programming wasn’t a major barrier to progress and thinking.

The solution isn’t in programming languages or in our computers. It’s not about Code.org and trying to get more kids into our existing paradigm. This isn’t an awareness or interest problem. The solution involves our goals and expectations.

We must stop trying to solve every problem perfectly. We must stop trying to predict everything. We must stop pursuing The Answer, as if it actually exists. We must stop trying to optimize everything for speed and precision and accuracy. And we must stop applying computerized techniques to every single category of activity – at least in a way where we expect the computer to forever do the work.

We must create art. Programming is art. It is full of accidents and confusions and inconsistencies. We must turn it back into an analog experience rather than a conflicted digital one. Use programming to explore and narrate and experiment rather than answer and define and calculate.

The tools that spring from those objectives will be more human.  More people will be able to participate.  We will make more approachable programs and languages and businesses.

In the end our problem with programming is one of relation – we’re either relating more or less to the world around us, and as computers grow in number and integration we need to be able to commune with, shape, and relate to them.


The human race began a path towards illiteracy when moving pictures and sound began to dominate our mode of communication. Grammar checking word processors and the Internet catalyzed an acceleration of the process. Smartphones, 3-D printing, social media and algorithmic finance tipped us towards near total illiteracy.

The complexity of the machines has escaped our ability to understand them – to read them and interpret them – and now, more importantly, to author them. The machines author themselves. We inadvertently author them without our knowledge. And, in a cruel turn, they author us.

This is not a clarion call to arms to stop the machines. The machines cannot be stopped, for we will never want to stop them, so intertwined are they with our survival (the race to stop climate change and/or escape the planet will not be won without the machines). It is a call for a return to literacy. We must learn to read machines and maintain our authorship if we wish at all to avoid unwanted atrocities and a painful decline into possible evolutionary irrelevance. If we wish to mediate the relations between each other, we must remain the authors of those mediations.

It does not take artificial intelligence for our illiteracy to become irreversible. It is not the machines that will do us in and subjugate us and everything else. Intelligence is not the culprit. It is ourselves and the facets of ourselves that make it too easy to avoid learning what can be learned. We plunged into a dark age before. We can do it again.

We are in this situation, perhaps, unavoidably. We created computers and symbolics that are good enough to do all sorts of amazing things. So amazing that we just went and found ways to unleash things without all the seeming slowness of the evolutionary and behavioral consequences we’ve observed play out on geological time scales. We have unleashed an endless computational kingdom of a variety rivaling that of the entire history of Earth. Here we have spawned billions of devices with billions and billions of algorithms and trillions and trillions and trillions of data points about billions of people and trillions of animals and a near-infinite hyperlinkage between them all. The benefits have outweighed the downsides in terms of pure survival consequences.

Or perhaps the downside hasn’t caught us yet.

I spend a lot of my days researching, analyzing and using programming languages. I do this informally, for work, for fun, for pure research, for science. It is my obsession. I studied mathematics as an undergraduate – it too is a language most of us are illiterate in, and yet our lives are dominated by it. A decade ago I thought the answer was simply this:

Everyone should learn to program. That is, everyone should learn one of our existing programming languages.

It has more recently occurred to me that this is not only unrealistic, it is actually a terrible idea. Programming languages aren’t like English or Spanish or Chinese or any human language. They are much less universal. They force constraints we don’t understand and yet don’t allow for any wiggle room. We can only speak them by typing incredibly specific commands on a keyboard connected to a computer architecture we thought up 50 years ago – which isn’t even close to the dominant form of computer interaction most people use (phones, tablets, TVs, game consoles with games, maps and txt messages and mostly consumptive apps). Yes, it’s a little more nuanced than that, in that we have user interfaces that try to allow us all sorts of flexibility in interaction and will handle the translation to specific commands for us.

Unfortunately it largely doesn’t work. Programming languages are not at all like how humans program. They aren’t at all how birds or dogs or dolphins communicate. They start as an incredibly small set of rules that must be obeyed or something will definitely break down (a bug! a crash!). Sure, we can write an infinite number of programs. Sure, most languages, and the computers we use to run the programs written in them, are universal computers – but that doesn’t make them at all as flexible and useful as natural language (words, sounds, body language).
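To see how little wiggle room there is: a language’s parser accepts or rejects a program wholesale, with nothing like a human listener’s tolerance for noise. A tiny illustrative Python sketch (the helper and the example strings are mine):

```python
def accepts(source: str) -> str:
    """Ask Python's own parser whether it will accept a program at all."""
    try:
        compile(source, "<example>", "exec")
        return "accepted"
    except SyntaxError:
        return "rejected"

working = "total = sum([1, 2, 3])"
# Drop one character -- the kind of noise any human listener shrugs off.
broken = "total = sum([1, 2, 3)"

print(accepts(working))  # accepted
print(accepts(broken))   # rejected
```

One missing bracket and the machine understands nothing at all – there is no “I mostly got what you meant.”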

As it stands now we must rely on about 30 million people on the entire planet to effectively author and repair the billions and billions of machines (computer programs) out there (http://www.infoq.com/news/2014/01/IDC-software-developers).

Only 30 million people speak computer languages effectively enough to program them. That is a very far cry from a universal or even natural language. Most humans can understand any other human, regardless of the language, on a fairly sophisticated level – we can easily tell each other’s basic state of being (fear, happiness, anger, surprise, etc.) and begin to scratch out sophisticated relationships between ideas. We cannot do this at all, with any regularity or reliability, with computers. Certainly we can communicate some highly specific ideas/words/behaviors with some highly specific programs – but we cannot converse even remotely closely with a program/machine in any general way. We can only rely on some of the 30 million programmers to improve the situation slowly.

If we’re going to be literate in the age of computation, our language interfaces with computers must become much better. And I don’t believe that’s going to happen by billions of people learning Java or C or Python. No, it’s going to happen by the evolution of computers and their languages becoming far more human-authorable. And it’s not clear the computers’ survival depends on it. I’m growing in my belief that humanity’s survival depends on it, though.

I’ve spent a fair amount of time thinking about what my own children should learn in regard to computers. And I have not at all pushed them into learning some specific language of today’s computers. Instead, I’ve focused on them asking questions and not being afraid of the confusing, probabilistic nature of the world. It is my educated hunch that the computer languages of the future will account for improbabilities and actually rely on them, much as our own natural languages do. I would rather have my children understand our current human languages in all their oddities and all their glorious ability to express ideas and questions, and forever be open to new and different interpretations.

The irony is… teaching children to be literate in today’s computer programs, as opposed to human languages and expression, is, I think, likely to leave them more illiterate in the future, when the machines or their human authors have developed a much richer way to interact. And yet the catch-22 is that someone has to develop these new languages. Who will do it, if not myself and my children? Indeed.

This is why my own obsession is to continue to push forward a more natural and messier idea of human-computer interaction. It will not look like our engineering efforts today, with their focus on speed and efficiency and accuracy. Instead it will focus on richness and interpretative variety and serendipity and survivability over many contexts.

Literacy is not mere efficiency. It is a much deeper phenomenon. One that we need to explore further, and in that exploration not settle for the computational world as it is today.



Certain activities are fundamental to the human endeavor.

The list:

  • Acquisition of food, shelter and water
  • Mate attraction and selection
  • Procreation
  • Acquisition or forage of information regarding the first three
  • Acquisition of goods and services that may make it more efficient to get the first three
  • Exchange of the above

Only software that deals directly with these activities flourishes into profitable, long term businesses.

I define software here as computer programs running on servers and personal computers and devices.

The race to the bottom in the pricing of information, and of software and the hardware that runs it, ensures that only software businesses that scale beyond all competition can last. Scale means massive and efficient data centers, massive support functions and a steady stream of talented people to keep it all together. And the only activities in the human world that scale enough are those that are fundamental to us all.

Sure there’s a place for boutique and specialist software but typically firms like that are swallowed by the more fundamental firms who bake the function directly into their ecosystem and then they give it away. And this is also why the boutique struggles long term. Those fundamental software builders are always driving the cost down. So even if a boutique is doing ok now it is not sustainable if it is at all relevant. It will be swallowed.

Open source software only reinforces this. In fact it takes this idea to the extreme. The really successful open source projects are always fundamental software (os, browser, web server, data processing) and further drive the price of software to zero.

Scale is the only way to survive in software.

This goes for websites, phone apps, etc. All the sites and apps that focus on niche interests that don’t deal directly with the fundamental activities above either get assimilated into larger apps and sites with broad function, or they wither and die, unable to be sustained by a developer.


In Defense of The Question Is The Thing

I’ve oft been accused of being all vision with little to no practical finishing capability. That is, people see me as a philosopher, not a doer. Perhaps a defense of myself and my philosophy/approach isn’t necessary, and the world is fine to have tacticians and philosophers, and no one is very much put off by this.

I am not satisfied. The usual notion of doing and what is done and what constitutes application is misguided and misunderstood.

The universe is determined yet unpredictable (see complexity theory, cellular automata). Everything that happens and is has antecedents (see behaviorism, computation, physics). Initial conditions have a dramatic effect on system behavior over time (see chaos theory). These three statements are roughly equivalent, or at least very tightly related. And they form the basis of my defense of what it means to do.

“Now I’m not antiperformance, but I find it very precarious for a culture only to be able to measure performance and never be able to credit the questions themselves.” – Robert Irwin, page 90, Seeing Is Forgetting the Name of the Thing One Sees

The Question Is The Thing! And by The Question I mean the context or the situation or the environment or the purpose. And I don’t mean The Question or purpose as assigned by some absolute authority. I mean the question in the sense of a particular, relative instance: what is the question at hand?

Identifying and really asking the question at hand drives the activity to and fro. To do is to ask. The very act of seriously asking a question delivers the doing, the completion. So what people mistake in me as “vision” is really an insatiable curiosity and a need to ask the right question. To do without the question is nothing; it’s directionless motion, a random walk. To seriously ask a question, every detail of the context is important. To begin answering the question, the environment must be staged and the materials provided for answers to emerge.

There is no real completion without a constant re-asking of the question. Does this answer the question? Did that answer the question?

So bring it to something a lot of people associate me with: web and software development. In the traditional sense I haven’t written a tremendous amount of code myself. Sure, I’ve shipped lots of pet projects, chunks of enterprise systems, scripts here and there, and the occasional well-crafted app and large-scale system. There’s a view, though, that unless you wrote every line of code or contributed some brilliant algorithm line for line, you haven’t done anything. The fact is there’s a ton of code written every day on this planet and very little of it would I consider “doing something”. Most of it lacks a question; it’s not asking a question – a real, big, juicy, ambitious question.

Asking the question in software development requires setting the entire environment up to answer it. Literally the configuration of programmer desks, designer tools, lighting, communication cadence, resources, mixing of styles and on and on. I do by asking the question and configuring the environment. The act of shipping software takes care of itself if the right question is seriously asked within an environment that lets answers emerge.

Great questions tend to take the shape of How Does This Really Change the World for the User? What new capability does this give the world? How does this extend the ability of a user to X? What is the user trying to do in the world?

Great environments to birth answers are varied and don’t stay static. The tools, the materials all need to change per the unique nature of the question.

Often the question begs us to create less. Write less code. Tear code out. Leave things alone. Let time pass. Write documentation. Do anything but add more stuff that stuffs the answers further back.

The question and emergent answers aren’t timeless or stuck in time. As the context changes, the question, or the shape of the question, may change.

Is this to say I’m anti-shipping (or anti-performance, as Irwin put it)? No. Let’s put it this way: we move too much and ask too little and actually don’t change the world that much. Do the least amount to affect the most – that is closer to my approach.

The question is The Thing, much more than the thing that results from the work. The question has all the power. It starts and ends there.


Here’s a lovely piece about how Curiosity has less CPU horsepower than an iPhone 5.

The very cool thing I key on here is the clever solution to the incredible technical achievement of having a rover on Mars doing all this science:

Each day, after the lander downloads the latest batch of data to the 100 scientists watching her movements, the team determines what they want her to do next and makes sure that their goals align with Curiosity’s capabilities. Then the software team writes the necessary script and sends it off via uplink. Because of the roughly 14 minutes it takes for the instructions to reach Mars, all of this has to be done within the window when Curiosity is sleeping.

The technology is actually a dance. A dance of all the information going between the red planet, Curiosity, the void, the earthbound computers, the scientists’ brains, back out into the computers, back to the void, back to Curiosity… a musical remix, ever evolving. The team behind Curiosity didn’t attempt to program the be-all and end-all of Curiosity. Instead they came up with some building blocks and a language to communicate, and agreed to dance.

And the bigger idea here is that everything is connected. To work on interesting, important, useful problems, the approach is an interplay between humans, machines and software – it is rarely a steady-state solution or even a discrete solution.


I got into the car with Justin Bieber pouring his saccharine platitudes out of my speakers. It made me wonder: are we some freakish society that bears children, trains them to train themselves to be pop stars, and then sucks on that ’til it’s not so sweet and then spits it out? Rinse. Repeat.

Or have there always been such societies where the popular ideas are so easy to ride to fame and fortune? And the popular ideas so unfulfilling the only thing we can do is take more hits?


First, we will bring ourselves to computers. The small- and large-scale convenience and efficiency of storing more and more parts of our lives online will increase the hold that formal ontologies have on us. They will be constructed by governments, by corporations, and by us in unequal measure, and there will be both implicit and explicit battles over how these ontologies are managed. The fight over how test scores should be used to measure student and teacher performance is nothing compared to what we will see once every aspect of our lives from health to artistic effort to personal relationships is formalized and quantified.


[…]

There is good news and bad news. The good news is that, because computers cannot and will not “understand” us the way we understand each other, they will not be able to take over the world and enslave us (at least not for a while). The bad news is that, because computers cannot come to us and meet us in our world, we must continue to adjust our world and bring ourselves to them. We will define and regiment our lives, including our social lives and our perceptions of our selves, in ways that are conducive to what a computer can “understand.” Their dumbness will become ours.


from: David Auerbach, n+1. Read it all.


I love this piece.  Brilliant synthesis.  Hard to prove… just have to watch it all unfold.


After a TechCrunch article written by Sarah Lacy, posted August 22, 2011

A few months ago Sarah Lacy, a TechCrunch.com writer, was giving a talk in her hometown of Memphis, TN, and someone asked what the city could do to ignite more entrepreneurship among inner city kids. Her immediate answer was to teach coding – even basic app-building skills – along with English and Math in every public school. She was surprised that her brother – an engineer who worked for many years in Silicon Valley before relocating to the Midwest – didn’t necessarily agree.

The thing is, while the first-level issue is who gets the coding jobs – foreign or domestic coders – it occurred to me that we are in roughly the 30th year of serious code writing, and it has had some unanticipated consequences. The changes in the world brought about by the Internet and technology have changed what is done by people. Now, more and more, what is done is done by software applied to different technologies. The world of TechCrunch and other quasi-geek clusters is alive and well due to the prevalence of algorithms. They are the workers, in a myriad of different ways, today.

They paint the cars, cut the steel, do the book binding, print the content, answer the phone and a zillion other things that we all used to do. Cumulatively, the jobs that were are now being done by technology – just as when ol’ Ned Lud (see emphatic published accounts for the most favored spelling…) was raised to mythical status between 1779 and 1812, as changes in British textile practices brought the old ways to a screeching halt.

No, I am not being a Luddite here. I am simply pointing out that, when all the talking heads whine and moan about this political union or that political union not producing jobs for the reconstitution of the economy, they should take note: the jobs of the past that went away aren’t coming back. Many of them aren’t coming back because they were long overdue to be absorbed before the downturn, and no one – or not many – took notice.

Instead of asking for someone else to provide jobs, it is time to create jobs based on the uncomfortable situation we find ourselves in every 70-90 years: change has overtaken the status quo. Now we need to create jobs that machines can’t do – yet. That is, jobs involving organizing communities, infrastructure, law, education and human care… for children, for families in transition, for elders and for soldiers who are brought back and deposited on the steps of America. Those soldiers were taught to do what was necessary to survive, and nowhere is there better training for that purpose. Now, however, they have done it under duress, for double tours, etc. etc. etc. To be spit out by those that trained them – as worn-out, disposable civilians with defects, without the slightest bit of care for how to survive and reestablish domestic lives – is despicable. Software and algorithms can’t pull that off. We can, if we stop waiting for someone else to do something we favor or don’t find dogmatically repugnant.

HP’s decision to go big and purchase the U.K.’s Autonomy Corp., and probably other players, doesn’t seem so ridiculous under a ‘software good – hardware sad’ scenario, does it?


Almost all humans do all the following daily:

  • Eat
  • Drink Water
  • Sleep
  • Breathe
  • Think about Sex/Get Sexually Excited
  • Communicate with Close Friends and Family
  • Go to the bathroom
  • People Watch
  • Groom

Almost all humans do the following very regularly:

  • Work (hunt, gather, desk job, factory job, sell at the market)
  • Have sex or have sexual activities
  • Listen to or play music
  • Play
  • Take inventory of possessions (count, tally, inspect, store)

A good deal of humans do the following regularly:

  • Go to school/have formal learning (training, go to school, college, apprenticeship)
  • Cook/Prepare Food
  • Read
  • Compete for social status
  • Court a mate

Fewer humans do the following occasionally:

  • Travel more than a few miles from home
  • Write (blog, novel, paper)
  • Eat away from home
  • Stay somewhere that isn’t their home
  • Exercise outside of work tasks (play sports, train, jog)

I’m sure we can think up many more activities in the bottom category, but probably not many more in the top three categories.

For a technology to be a mass-market success it has to, at its core, be about behaviors in the top categories. And it has to integrate with those behaviors in a very pure way, i.e. don’t try to mold the person; let the person mold the technology to their behavior.

I define mass-market success as “use by more than 10% of the general population of a country.” Few technologies and services achieve this. But those that do all deal with these FHAs: Twitter, Google, MySpace, Facebook, Microsoft, TV, radio, telephone, cellular phone…. The more of those activities they deal with, the faster they grow. Notice also that almost all of these examples do not impose a set of specific use paths on users, e.g. Twitter is just a simple messaging platform that you can use in a bazillion contexts.

It’s not about making everything more efficient or more technologically beautiful. It is about humans doing what they’ve done for 100,000+ years, with contemporary technology. If you want to be a successful service, you have to integrate with users and evolve behaviorally alongside them.


The iPad, like the iPod, iPhone and iMac, isn’t a revolution in computer science, design interface, consumer packaging or UI. It’s a revolution in the economics of those things. Now that there’s a device on the market at 500 bucks with an unlimited data plan for 30 bucks a month, it’s almost assured that the iPad type of computing and media platform will be popularized – and maybe not even by Apple. The hype of the technology will surely drown out the economic story for some time, but in the long run the implications of the price of this technology will be the big story.

Sure, we have sub-500-dollar computers and media devices. They have never been this functional or this easy. Apple has just shown what is possible, so now the other competitors will have to follow suit. It really doesn’t matter in the grand scheme if it’s Apple or HTC or Google or Microsoft or Sony who wins the bragging wars each quarter – the cat is out of the bag – cost-effective, easy-to-use and fun computing for everyone is possible in a mass-producible construction.

There are some interesting side effects coming out of this. If a business can’t make huge profits from the hardware or the connection or the applications, where will the profit come from? (I’m not saying companies won’t make good profits; I just don’t think they will be sustainable – especially for companies used to big margins.)

Obviously the sale of content matters. Books, movies, games, music and so on. This computing interface makes it far easier to buy content and get a sense that it was worth buying. If the primary access channel is through a browser, I think people aren’t inclined to pay – we are all too used to just freely browsing. On a tablet the browser isn’t the primary content access channel.

The challenge for content providers is that the quality of the content has to be great. This new interface requires great interactivity and hi-fi experiences. Cutting corners will be very obvious to users. There’s also no easy search engine to trick into sending users to a subpar experience. That only works when the primary channel is the browser.

If advertising is going to work well on this platform, boy does there have to be a content and interaction shift in the industry. Banners and search ads will just kill an experience on this device. Perhaps more old-school, magazine-style ads will work, because once you’re in an app you can’t really do some end-around or get distracted. Users might be willing to consume beautiful hi-fi ads. Perhaps the bigger problem is that sending people to a browser to take action on an ad will be quite weird.

Clicks can’t be the billable action anymore. Clicks aren’t the same on a tablet! (In fact, most Internet ads won’t work on the iPad. Literally. Flash and click-based ads won’t function.)

Perhaps the apps approach to making money will work. To date the numbers don’t add up. Unless users are willing to pay more for apps than they do on the iPhone, only a handful of shops will be able to handle the economics of low-margin, mass software. So far iPad apps seem to be higher priced. More users coming in may change that, though.

In a somewhat different vein…. Social computers will be a good source of cold and flu transmission. If we’re really all going to be leaving these lying about and passing them between each other, the germs will spread. Doesn’t bother me, but some people might consider that.

Will users still need to learn a mouse in the future?

Should we create new programming interfaces that are easier to manipulate with a touch screen? LabVIEW-style products come to mind.

What of bedroom manners? The iPhone and BlackBerrys are at least small…

And, of course, the porn industry. The iPhone wasn’t really viable as a platform. This touch based experience with big screens… Use your imagination and I’m sure you can think up some use cases…

I do think this way of interacting with computers is here to stay. It’s probably a good idea to think through how it changes approaches to making money and how we interact with each other. I’d rather shape our interactions than be pushed around unknowingly….

Happy Monday!

