
Archive for the ‘web 2.0’ Category

The aim of most businesses is to create wealth for the people working in them. Generally the preference is to do this in a sustainable, scalable fashion so that wealth can continue to be generated for a long time. The specific methods may involve seeking public valuation in the markets, profitably selling more and more product directly, private valuation and investment, and more. The aim of most technology-based companies is to make the primary activity and product of the business involve technology. The common understanding of “technology” refers to information technology, biotechnology, advanced hardware and so forth – i.e. tools or methods that go beyond long-established ways of doing things and/or analog approaches. So the aims of a technology company are to create and maintain sustainable, scalable wealth generation through technological invention and execution.

Perhaps there are better definitions of these terms and clearer articulations of the aims of business, but this will suffice to draw out an argument for how technology companies could fully embrace the idea of a platform and, specifically, a technological platform. Too often the technology in a technology company exists solely in the end product sold to the market. It is a rare technology company that embraces technological thinking everywhere – witness big internet media still managing advertising contracts through paper and faxes, expense reports through papers stapled to static Excel spreadsheets, and so on. There are even “search” engine companies that are unable to search over all of their own internal documentation and knowledge.

The gains of technology are significant when applied everywhere in a company. A technological product produced by primitive and inefficient means is usually unable to sustain its competitive edge, as those with technology in their veins quickly catch up to any early lead by a first, non-technical mover. Often what the world sees on the outside of a technology company is The Wizard of Oz: a clever and powerful façade of technology, a vision of smoking machines doing unthinkable things, when in reality it is the clunky hubbub of a duct-taped factory of humans pulling levers and making machine noises. If the end result is the same, who cares? No one – if the result can be maintained. But it never scales to grow the human factory of tech-façade making. Nor does it scale to turn everything over to the machines.

What’s contemplated here is a clever and emergent interaction of human and machine technology, and how a company goes from merely using technology to becoming a platform. Consider the example of a company that produces exquisite financial market analysis for major brokerage firms. It may be that human analysts are far better than algorithms at making the brilliant and challenging pattern-recognition observations about an upcoming swing in the markets. There is still a technology to employ here. Such a company should supply its human analysts with tools and methods that increase the rate at which they can spot patterns, reduce the cost of spreading the knowledge where it needs to go, and complete the feedback loop on hits and misses. There is no limit to how deeply a company should look at enhancing its humans’ abilities. For instance, how many keystrokes does it take for an analyst to key in their findings? How many hops does a synthesized report go through before hitting the end recipient? How does the temperature of the working space impact pattern-recognition ability? Perhaps all those details matter far more to sustainable profit than tuning a minute facet of some analytic algorithm.

The point here is that there should be no facet of a business left untouched by technology enhancement. Too often technology companies waste millions upon millions of dollars updating their main technology product only to see modest gains or no gain at all. The most successful technology companies of the last 25 years have all found efficiencies through technology mostly unseen by end users, and these became their competitive advantages. Dell – ordering and build process. Microsoft – product pre-installations. Google – efficient power sources for data centers. Facebook – rapid internal code releases. Apple – a very efficient supply chain. Walmart – intelligent restocking. Amazon – everything beyond the core “ecommerce”.

In a sense, these companies recognized their underlying “platform” soon after recognizing their main value proposition. They learned quickly enough to scale that proposition – and to spend a solid blend of energy on scale and on product innovation. A quick aside: scale here is taken to mean how efficiently a business can provide its core proposition to the widest, deepest customer base. It does not refer solely to hardware or supply-chain infrastructure, though that is often a critical part of it.

One of many interesting examples of such platform thinking is the Coors Brewing Company in its heyday. Most people would not consider Coors a “technology” company, but in the 1950s it changed many industries with the introduction of the modern aluminum can. This non-beer technology reduced the cost of operations, created a recycling sub-industry, reduced the problem of tin cans damaging the beer’s taste, and so on. It also made it challenging for several competitors to compete on distribution, taste and production costs. This wasn’t the first time the Coors company put technology to use in surprising ways. It used to build and operate its own power plants to reduce reliance on non-optimal resources and to have better control over its production.

Examples like this abound. One might conclude that any company delivering product at scale can be classified as a technology company – they all will have a significant platform orientation. However, this does not make them a platform company.

What distinguishes a platform company from simply a technology company is that the platform is provided to partners and customers so they can scale their businesses as well. These are the types of companies whose product itself becomes scale. These are the rare, super-valuable companies: Google, Apple, Intel, Facebook, Microsoft, Salesforce.com, Amazon and so on. These companies often start by becoming highly efficient technically in the production of their core offering, then take that scale and license it to others. The value generation gets attributed to the scale provider appropriately, in that it becomes a self-realizing cycle. The ecosystem built upon such a company’s platform demands that the platform operator continue to build the platform so the ecosystem too may scale – and the platform operator only scales by giving more scale innovation back to the ecosystem. Think Google producing Android and offering Google Analytics for free. Think Facebook and Open Graph, and how brands rely on their Facebook pages to connect and collect data. Think Amazon and its marketplace and cloud computing services. Think Microsoft and MSDN/developer resources/cloud computing. Think Apple and iTunes, the App Store and so on.

It’s not all that easy though! There seems to come a time for every such platform company when a critical decision must be made before it’s obvious that it’s going to work: open the platform up to others, or not? Will the ecosystem adopt it? How will they pay for it? Can we deal with what is created? Are we truly at scale to handle this? Are we open enough to embrace the opportunities that come out of it? Are we ready to cede control? Are we ready to create our own competitors?

That last question is the big one. But it’s the one to embrace to become a super-valuable, rare platform at the heart of a significant ecosystem. And it happens to be the way to create a path to sustainable wealth generation that isn’t a short-lived parlor trick.

Read Full Post »

First, we will bring ourselves to computers. The small- and large-scale convenience and efficiency of storing more and more parts of our lives online will increase the hold that formal ontologies have on us. They will be constructed by governments, by corporations, and by us in unequal measure, and there will be both implicit and explicit battles over how these ontologies are managed. The fight over how test scores should be used to measure student and teacher performance is nothing compared to what we will see once every aspect of our lives from health to artistic effort to personal relationships is formalized and quantified.

 

[…]

There is good news and bad news. The good news is that, because computers cannot and will not “understand” us the way we understand each other, they will not be able to take over the world and enslave us (at least not for a while). The bad news is that, because computers cannot come to us and meet us in our world, we must continue to adjust our world and bring ourselves to them. We will define and regiment our lives, including our social lives and our perceptions of our selves, in ways that are conducive to what a computer can “understand.” Their dumbness will become ours.

 

from: David Auerbach, N+1.  read it all.   

 

I love this piece.  Brilliant synthesis.  Hard to prove… just have to watch it all unfold.

Read Full Post »

It’s fairly obvious that the next “advertising” land-rush is in mobile.  Really, it’s been that way for a solid 5 years.   What’s not yet clear is how the marketplace will develop.   Up until the explosion of iPhones and Android there wasn’t enough demand (inventory) to put into a marketplace that supports bidding, yield management and the associated structures.   It’s now time.

A couple of clear distinctions between mobile advertising and other mediums: how much more you know about the user, and how little real estate (display and attention) you get from the user.   I.e., the targeting has to be GREAT for this to work en masse.

Here are my thoughts on what the basics of the algorithms would be for a great mobile ad marketplace.

Targeting

Targeting the user isn’t terribly challenging, as a great deal of information is available to the advertising engine about a user.   Knowing where someone is, how often they frequent a location and what info they browse reveals pretty much everything an ad server would need.

Targeting facets:

  • Time of Day
  • Location (lat/long)
  • Demographic (gender, household income, age)
  • Service Provider
  • Phone/Client
  • Connection Speed
  • Segment (business user, soccer mom, etc.)
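The facets above can be sketched as a hard filter over an incoming ad request. This is an illustrative Python sketch only – the class names, fields and thresholds are my own assumptions, not any real ad server’s API:

```python
from dataclasses import dataclass

@dataclass
class AdRequest:
    """One incoming impression opportunity, carrying the targeting facets."""
    hour_of_day: int        # 0-23
    lat: float
    lon: float
    gender: str
    age: int
    carrier: str            # service provider
    device: str             # phone/client
    connection_kbps: int    # connection speed
    segment: str            # e.g. "business user", "soccer mom"

@dataclass
class TargetingRule:
    """What one advertiser asked for."""
    hours: set              # hours of day the ad should run
    segments: set           # audience segments the advertiser wants
    min_kbps: int           # skip rich media on slow connections

def matches(req: AdRequest, rule: TargetingRule) -> bool:
    """Hard filter: does this request satisfy the ad's targeting?"""
    return (req.hour_of_day in rule.hours
            and req.segment in rule.segments
            and req.connection_kbps >= rule.min_kbps)

req = AdRequest(19, 40.74, -73.99, "f", 34, "vzw", "iphone", 1500, "business user")
rule = TargetingRule(hours=set(range(17, 23)), segments={"business user"}, min_kbps=500)
print(matches(req, rule))  # True
```

A real engine would score partial matches rather than filter binarily, but the facet list maps onto request fields in exactly this way.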

Yield Management

Targeting only gets you so far.  The most important aspect of “online” advertising isn’t hitting someone right the first go-round, it’s getting the funnel right.  Can you take someone from the initial view/sighting/click of an ad through a transaction with the most profit possible?  That is the essential question in advertising.

Yield Management facets:

  • User click history (time of day, location patterns)
  • price per click/action/view
  • advertiser account balance and history
  • Time of Day
  • Location Features (bar district, business, sports complex, etc.)
  • type of advertiser (restaurant, national advertiser, services business, website, application)
  • type of advertisement (offer/coupon, brand ad, registration, etc)
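One hedged way to express the funnel idea: rank ads by expected revenue per impression, multiplying price by the click and conversion probabilities that the facets above would feed. All names and numbers here are illustrative assumptions, not real marketplace math:

```python
def expected_value_per_impression(price_per_action: float,
                                  p_click: float,
                                  p_convert_given_click: float) -> float:
    """Expected revenue from showing this ad once:
    price * P(click) * P(conversion | click)."""
    return price_per_action * p_click * p_convert_given_click

# Two hypothetical advertisers competing for the same slot.
ads = [
    {"name": "coupon_bar_district", "price": 2.00, "p_click": 0.04, "p_conv": 0.30},
    {"name": "national_brand",      "price": 5.00, "p_click": 0.01, "p_conv": 0.10},
]
ranked = sorted(ads,
                key=lambda a: expected_value_per_impression(
                    a["price"], a["p_click"], a["p_conv"]),
                reverse=True)
print(ranked[0]["name"])  # coupon_bar_district
```

Note the cheaper local coupon wins: 2.00 × 0.04 × 0.30 = 0.024 per impression versus 0.005 for the pricier brand ad. The yield facets (click history, location features, advertiser type) are what a real system would use to estimate those probabilities.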

Creative Execution

Beyond getting the math right, it’s important to get the creative – the design, content, UI, IA – of an ad correct.  Targeting and yield can only get you so far… if the ad stinks, well, it stinks.  Local ads are of a different type than Super Bowl ads.   What’s good creative in a specific time, location and context isn’t always what wins a CLIO Award.

Ad Capabilities:

  • Text only
  • Display
  • Connected to App
  • Click to SMS/private offer
  • Alerts
  • Customized to user info
  • Connected to Inventory Feeds (where it makes sense)

Bidding

To put the above three into play you need some sense of a bidded marketplace – some way for advertisers to compete for real estate.  On older local sites that was generally determined by very basic algorithms involving who pays the most for the top spot within a category and location (e.g. whoever pays the top cost-per-click for Restaurants in NYC gets the top spot).

This approach is no longer sufficient.  The market is there to compete for “top spots.”  However, what’s changed is the concept of a top spot.  Owning a keyword on a search engine, even a “local” search engine, doesn’t matter that much and isn’t worth bidding on.   What matters now is: are you the ad/sponsor/location/brand that is presented when the user passes through a particular latitude+longitude?

Amazingly, the world has been here before.  It’s called a billboard.   Very quickly that’s what local advertising online (in mobile apps) becomes –  a competition for a couple of premium “billboards” in navigation and “check in” apps and social networks.

The bidding algorithm will center on figuring out who pays the most, has the most inventory available and converts the most users over time.  You’ll pay more as an advertiser if you are farther away, don’t spend enough and/or can’t put butts in seats.  Figuring out how to report those metrics back isn’t that hard as more of our systems (e.g. Facebook Connect, OpenTable and POS systems) become tightly coupled.
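The billboard idea can be sketched as a scoring function: discount each advertiser’s bid by its distance from the user’s current lat/long, weight it by demonstrated conversion, and zero it out when the budget runs dry. This is a toy sketch under my own assumptions – the proximity decay and weights are placeholders, not any marketplace’s actual formula:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/long points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def billboard_score(bid: float,
                    conversion_rate: float,
                    budget_remaining: float,
                    user_latlon: tuple,
                    venue_latlon: tuple) -> float:
    """Who owns the 'billboard' for a user at this lat/long?
    Nearby, converting, funded advertisers dominate."""
    dist = haversine_km(*user_latlon, *venue_latlon)
    proximity = 1.0 / (1.0 + dist)                 # decays with distance
    has_budget = 1.0 if budget_remaining > 0 else 0.0
    return bid * conversion_rate * proximity * has_budget
```

Under this sketch a closer venue with the same bid and conversion rate always outscores a farther one, which is the “you’ll pay more if you are farther away” dynamic stated above.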

Recommendations

If you are still buying online ads based on category and general location keywords or IP location, you are wasting your money. And if your ads are still “click” based, you are wasting your money.  People don’t click on their mobile phones.  They act.

The Algorithm To Rule Them All (…Locally)

To be published soon!

It goes something like this (and this is very much not real math):

show_particular_ad? = category_segment_action_history + historical_action_per_impression * (budget_remaining / cost_per_action) is greater than the equivalent values of the other ads in consideration (based on basic relevance of lat/long, keyword and category).
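Taking the not-real-math above literally, the comparison might be sketched in Python roughly as follows. Every field name and the scoring shape are placeholders for whatever a real system would measure:

```python
def ad_score(ad: dict) -> float:
    """Placeholder score mirroring the pseudocode: action history plus
    historical action rate scaled by remaining spending capacity."""
    return (ad["category_segment_action_history"]
            + ad["historical_action_per_impression"]
              * (ad["budget_remaining"] / ad["cost_per_action"]))

def show_particular_ad(candidate: dict, competing: list) -> bool:
    """Show the candidate only if it outscores every other ad already
    judged relevant by lat/long, keyword and category."""
    return all(ad_score(candidate) > ad_score(other) for other in competing)
```

The interesting part is the budget_remaining / cost_per_action term: an advertiser with deep remaining budget and cheap actions gets boosted, which rewards advertisers who actually spend and convert.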

Read Full Post »

Now that both the iPad and the Wolfram|Alpha iPad app are available, it’s time to really evaluate the capabilities of these platforms.

Wolfram|Alpha on the iPad


[disclaimer: last year I was part of the launch team for Wolfram|Alpha – on the business/outreach end.]

Obviously I know a great deal about the Wolfram|Alpha platform… what it does today and what it could do in the near future and in the hands of great developers all over the world.  I’m not shy in saying that computational knowledge available on mobile devices IS a very important development in computing.  Understanding computable knowledge is the key to understanding why I believe mobile computable knowledge matters.   Unfortunately it’s not the easiest of concepts to describe.

Consider what most mobile utilities do… they retrieve information and display it.  The information is mostly pre-computed (meaning it was transformed before your request) and generally in a “static” form.   You cannot operate on the data in a meaningful way.  You can’t query most mobile utilities with questions that have never been asked before and expect a functional response.  Even the really cool augmented-reality apps are basically just static data.  You can’t do anything with the data being presented back to you… it’s simply an information overlay on a 3D view of the world.

The only popular applications that currently employ what I consider computable knowledge are navigation apps, which very much are computing in real time based on your requests (locations, directions, searches).    Before nav apps you had to learn routes by driving them, walking them, etc., really spending time associating a map, road signs and your own sense of direction.   GPS navigation helps us all explore the world and get around much more efficiently. However, navigation is only one of the thousands of tasks we perform that benefit from computable knowledge.

Wolfram|Alpha has a much larger scope!    It can compute so many things against your current real-world conditions and the objects in the world you might be interacting with.   For instance, you might be a location scout for a movie: you want to know not only how far away the locations you’re considering are, you want to compute ambient sunlight, typical weather patterns, wind conditions, the likelihood your equipment might be in danger and so forth.  You even need to consider optics for your various shots. You can get at all of that right now with Wolfram|Alpha.  This is just one tiny, very specific use case.  I can work through thousands of these.

The trouble people cite with Wolfram|Alpha (in its incarnations to date) is that it can be tough to wrangle the right query.   The challenge is that people still think about it as a search engine.   The plain and simple fact is that it isn’t a web search engine, and you should not use it as one.  Wolfram|Alpha is best used to get things done. It isn’t the tool you use to get an overview of what’s out there – it’s the system you use to compute, to combine concepts, to design.

The iPad is going to dramatically demonstrate the value of Wolfram|Alpha’s capabilities (and vice versa!). The form factor has enough fidelity and mobility to show why having computable knowledge literally at your fingertips is so damn useful.  The iPhone is simply too small, and you don’t perform enough intensive computing tasks on it to take full advantage.  The other thing the iPad and similar platforms will demonstrate is that retrieving information isn’t going to be enough for people.  They want to operate on the world.  They want to manipulate.  The iPad’s major design feature is that you physically manipulate things with your hands.  The iPhone does that too, but again, it’s too small for many operations.   Touch-screen PCs aren’t new, but they are usually not mobile.  Thus, here we are on the cusp of direct manipulation of on-screen objects.  This UI will matter a great deal to users.  They won’t want to just sort, filter and search again.  They will demand that things respond in meaningful ways to their touches and gestures.

So how will Wolfram|Alpha take advantage of this?   It’s already VISUAL! And the visuals aren’t static images.  Damn near every visualization in Wolfram|Alpha is computed in real time, specifically for your query.   The visuals can respond to your manipulations.  In the web version of Wolfram|Alpha this didn’t make as much sense, because the keyboard and mouse aren’t at all the same as your own two hands on top of a map, graph, 3D protein, etc.

Early on there was a critical review of Wolfram|Alpha’s interface – how you actually interact with the system.  It was dead on in many respects.

WA is two things: a set of specialized, hand-built databases and data visualization apps, each of which would be cool, the set of which almost deserves the hype; and an intelligent UI, which translates an unstructured natural-language query into a call to one of these tools. The apps are useful and fine and good. The natural-language UI is a monstrous encumbrance…

In an iPad world, natural language will take a back seat to hands-on manipulation.  Wolfram|Alpha will really shine when people manipulate the visuals, the data display and the various shortcuts. People’s interaction with browsers is almost all link- or text-based, so the language issues with Wolfram|Alpha and other systems are always major challenges.  What will be interesting now is how many popular browser services will be able to successfully move over to a touch interface.  I don’t think that many will make it.  A new type of service will have to crop up, as iPad apps will not be simply add-ons to a web app, like they usually are on the iPhone.  These services will have to be great at handling direct manipulation, getting actual tasks accomplished, and will need to be highly visual.

My iPad arrives tomorrow.  Wolfram|Alpha is the first app getting loaded. And yes, I’m biased.  You will be too.

Read Full Post »

I seriously wonder about this all the time.

Spending a lot of time thinking about collective intelligence and collaborative filtering over the last decade has led me to believe that most of the stuff we’re creating actually reduces our vision.

From Facebook to Twitter to iPhones… we’re pruning our networks and our opportunities to actually run into new people and new experiences.  Why have a new, uncomfortable conversation at a school function when you can just text your friends on your phone?  Why participate in a town hall meeting when you can just join a Facebook group?  Why surf the web anymore when Twitter can just tell you what’s hot?  Why go to a bar for a band you’ve never heard of when Pandora can just pick what you like?

Maybe it’s just me.

Food for thought.

Read the following Edge piece or check out “You are not a gadget”.

34. The Internet today is, after all, a machine for reinforcing our prejudices. The wider the selection of information, the more finicky we can be about choosing just what we like and ignoring the rest. On the Net we have the satisfaction of reading only opinions we already agree with, only facts (or alleged facts) we already know. You might read ten stories about ten different topics in a traditional newspaper; on the net, many people spend that same amount of time reading ten stories about the same topic. But again, once we understand the inherent bias in an instrument, we can correct it. One of the hardest, most fascinating problems of this cyber-century is how to add “drift” to the net, so that your view sometimes wanders (as your mind wanders when you’re tired) into places you hadn’t planned to go. Touching the machine brings the original topic back. We need help overcoming rationality sometimes, and allowing our thoughts to wander and metamorphose as they do in sleep.

David Gelernter

and do all this before midnight tonight when you pre-order your vision reducing iPad, like me!

oh well….

Read Full Post »

Yes, Paul Carr of TechCrunch is right in many ways… the real-time web, and the people powering it, can’t really handle the truth.   I’ve said this in the past too.  The real-time web is not going to last as a viable source of data and truth.  To make it reliable it’s going to have to be far less real-time.  Getting to the facts takes time, resources and sometimes vast amounts of thought (by a computer or a human).

What’s troubling, though, is that there’s a ton more misinformation pain to go through before users and/or companies figure out what to do with all this mass real-time web publishing.  This Ft. Hood Twitter stuff is pretty bad.  The celebrity death rumors are horrible. How much worse does it have to get before our values catch up? Or maybe it’s OK?  Maybe deciphering real from fake information is best left up to the end user?  Maybe it’s better than less info?

 

Read Full Post »

Whether it’s “valid” or not, humans (and probably most animals) make associations between new, unknown things and similar-seeming known things.  In fact, this is the basis of communication.

In the case of discussing new websites/services/devices like Wolfram|Alpha, Bing, Kindle, iPhone, Twitter and so on, it’s perfectly reasonable to associate them with their forebears.  Until users and society get comfortable with the new thing and have a way of usefully talking about it, making comparisons to known things is effective in forming shared knowledge.

My favorite example of this is Wikipedia and wikis.  What the heck is a wiki?  And what the heck is Wikipedia, based on this wiki?  Don’t get me wrong – I know what a wiki is. But to someone who doesn’t, hasn’t used one, and hasn’t contributed to one, it’s pretty hard to describe without giving them anchors based on stuff they do know: “online encyclopedia”, “like a blog but more open”…  (for fun, read how the media used to talk about Wikipedia, more here)

More recent is Twitter.  What is it like?  A chat room? A social network?  A simpler blog? IM?  Right… it’s all that and yet something different. It’s Twitter.  You know it when you use it.

Just as in nature, new forms are always evolving with technology.  Often new tech greatly resembles its ancestors.  Other times it doesn’t.

In the specific case of Wolfram|Alpha and Bing/Google… they share a common interface in the form of the browser and an HTML text field.  They share a similar foundation in trying to make information easy to access.  The twist is that Wolfram|Alpha computes over retrieved information and can actually synthesize it (combine, plot, correlate) into new information.  Search engines retrieve information and synthesize ways to navigate it.  Very different end uses, and often very complementary.  Wikipedia uses humans to synthesize information into new information, so it shares some concepts with Wolfram|Alpha.  Answers.com and other answer sites are typically a mash-up of databases, and share with web search engines the concept of synthesizing ways to navigate data.

All of these are USEFUL tools and they ARE INTERCONNECTED.  None of them will replace the others.  Likely they will all co-evolve, and we will evolve our ways of talking about them.

Read Full Post »

One of my favorite things to do every day is to visit CNBC.com and read their new prediction of the MARKET BOTTOM.

Through the power of the Internet we can trace just how completely wrong they are every time.

Lesson: stop predicting things like this.  You can’t do it.

Unless your goal is to entertain… if so, keep doing it, because it is entertaining to me!

Read Full Post »

Michael Lynton responds with a confusing analogy to the blogosphere’s blast of his now-infamous comment: “I’m a guy who sees nothing good having come from the Internet. Period.”

The fact that he’s following up to add context is great for his argument and his agenda.  Unfortunately his choice of analogy – or the choice to use an analogy at all – muddles his argument.  The Internet isn’t like anything.  The abstract workings of how people behave online are not unlike how they behave offline, but the details (actual behaviors, reinforcers and consequences) are very different.  His analogy, the Interstate Highway System, oversimplifies his argument and the ultimate concept he’s chasing: piracy.

Contrast the expansion of the Internet with what happened a half century ago. In the 1950’s, the Eisenhower Administration undertook one of the most massive infrastructure projects in our nation’s history — the creation of the Interstate Highway System. It completely transformed how we did business, traveled, and conducted our daily lives. But unlike the Internet, the highways were built and operated with a set of rational guidelines. Guard rails went along dangerous sections of the road. Speed and weight limits saved lives and maintenance costs. And officers of the law made sure that these rules were obeyed. As a result, as interstates flourished, so did the economy. According to one study, over the course of its first four decades of existence, the Interstate Highway System was responsible for fully one-quarter of America’s productivity growth.

We can replicate that kind of success with the Internet more easily if we do more to encourage the productivity of the creative engines of our society — the artists, actors, writers, directors, singers and other holders of intellectual property rights — yes, including the movie studios, which help produce and distribute entertainment to billions of people worldwide.

What specific success are we replicating (and what is this study he cites)?  How are the physical constraints of the highway system like anything about the mostly non-physical Internet?  And the bigger question… how is the function of the highway system (moving people about) comparable at all to that of the Internet (moving info, a place to exhibit, converse, transact… and so on)?

I don’t know what will reduce piracy.  I don’t know what will ensure that Sony and others can make as much money from content as they would like.   I do know that Lynton has made no progress in furthering his argument, and perhaps took a step back by not just sticking to this one key point.

But, I actually welcome the Sturm und Drang I’ve stirred, because it gives me an opportunity to make a larger point (one which I also made during that panel discussion, though it was not nearly as viral as the sentence above). And my point is this: the major content businesses of the world and the most talented creators of that content — music, newspapers, movies and books — have all been seriously harmed by the Internet.

At least this is something we can argue.  (I don’t think his statement is accurate and I’ll write on that later).

Read Full Post »

Here is one of the best blog posts on putting Wolfram|Alpha into perspective:

Asking which result is “right” misses the point. Google is a search engine; it did exactly what it’s supposed to do. It isn’t making any assumptions about what you’re looking for, and will give you everything the cat dragged in. If you’re an elementary school teacher or a flat-earther, you can find the result you want somewhere in the big, messy pile. If you want accurate data from a known and reliable source, and you want to use that data in other computations, you don’t want Google’s answer; you want Alpha’s. (BTW, the Earth’s circumference is .1024 of the distance to the Moon.)

When is this important? Imagine we were asking a more politically charged question, like the correlation between childhood vaccinations and autism, or the number of civilians killed in the six-day war. Google will (and should) give you a wide range of answers, from every part of the spectrum. It’s up to you to figure out where the data actually came from. Alpha doesn’t yet have data about autism or six-day war casualties, and even when it does, no one should blindly assume that all data that’s “curated” is valid; but Wolfram does its homework, and when data like this is available, it will provide the source. Without knowing the source, you can’t even ask the question.

Read Full Post »
