Tuesday, May 31, 2016

Telirati Tips #1 Sony RAW Noise and Bricking Problems and Solutions

Here we'll take a short break from mobile telecommunications, IoT, project management, and other Serious Topics to cover a little photography. I recently ran into two commonplace problems with my camera, and found solutions to both:
  1. Noisy RAW files
  2. Bricked cameras when updating

I set out to see if a firmware update would cure a problem with excess noise in RAW images from my Sony a6000, and on my way to find out, I discovered that Sony's Mac OS X firmware updater is a flaming bag of poop that bricked my camera. What I learned on my way to a solution is probably applicable to other similar Sony cameras.

The Sony a6000 is a wonderful camera. I bought one when it first came out as an upgrade from my NEX-5. In silver, it has a classic look without pandering to hipster faux 1950s rangefinder affectations. With 24 megapixels in an APS-C sensor, it packs prosumer DSLR specs into an under $1000 compact camera body. Sony's mirrorless product line got me back into photography, starting with the NEX-5, which is a modern classic of industrial design and a tour de force of camera technology packed in a tiny magnesium body. I especially like shooting with an old Canon f1.4 50mm lens on an adapter/speed booster that brings the effective wide-open aperture to around f1.2, with a scalpel-fine depth of field.

I also enjoyed treating the sensor in the NEX-5 as if it were an electronic sheet of film, using RAW image data and digital darkroom software like RawTherapee to perform the kinds of corrections modern cameras normally do for you. The problem was that the RAW files uploaded from the a6000 were excessively "noisy." Areas that should have been smooth were speckled with what looked like random noise. So I was constrained to using the JPEG files, which were, really, just fine. But it continued to annoy me that I wasn't getting at exactly what the lens laid down on the sensor.

Recently it occurred to me that I should check the firmware version. I downloaded the firmware updater from Sony's site, borrowed a Mac to run it on, and started the update. The updater informed me I was upgrading from firmware version 1.00 to version 3.10. Excellent! With so many missed updates, I felt my odds were good that there was a fix for noisy RAW files in there somewhere.

The updater has a spartan user interface with a text area purportedly reflecting the state of the update, prompting me to perform various steps like connecting the USB cable and selecting the correct mode on the camera. Based on what the updater was telling me, the update appeared to complete correctly. I clicked the "Finish" button and, somewhat to my horror, the camera did not restart. The screen was blank. A red LED near the battery door was on. Turning the camera off and back on did not help. Nor did pulling the battery. Re-running the updater yielded the same result.

A search for similar problems turned up a lot of untested advice: Turn it off, try again, take out the battery, etc. None of those nostrums helped. I started to search for official support from Sony for bricked cameras. None. You're on your own.

It turns out only one thing matters: the Mac must not enter a power-saving state during the update. If it does, the update may appear to have completed, but the firmware will be corrupted and the camera will not boot. If you find yourself with a bricked camera, do this:
  1. Pull the battery and put it back
  2. Turn the camera on
  3. Exit the updater app
  4. Start the updater app
  5. Connect the camera to the Mac with a USB cable
  6. Follow the steps in the updater, skipping those, like checking the firmware version, that can't be performed on a bricked camera
  7. Make certain the computer does not enter a power-saving state, either by periodically moving the mouse cursor or, on OS X, by running the built-in caffeinate command in a Terminal window
If you follow these steps, your camera should turn on when the update is completed.

The really good news is that the firmware update appears to have fixed the "noisy RAW files" issue! I am happily using my favorite digital darkroom workflow again.

Friday, May 20, 2016

Telirati Analysis #18 The QUIC Brown Fox Jumped Over the Top of Carrier Messaging, or Allo, Duo, WebRTC, QUIC, Jibe, and RCS, Explained

[Photo: Vulpes vulpes, red fox. Source: Wikipedia.org]

At Google I/O 2016, Google announced two new messaging products: Allo, for text messaging, and Duo, for video communications. These are the most recent in a series of messaging products Google has created, none of which have succeeded in attracting a really large user community the way that other messaging products have done. Google doesn't release figures for monthly active users of Hangouts, while WhatsApp has a billion users, Facebook Messenger and QQ have 850 million, and WeChat has about 700 million. The stakes in messaging are very high, and, so far, Google is an also-ran.

In 2015, it looked like Google might go in a different direction, perhaps acting as a spoiler for proprietary messaging apps that don't interoperate and don't use carrier protocols like SMS and MMS. Google bought a company called Jibe that makes next-generation messaging servers for standard telecom protocols called Rich Communications Services, or RCS. If Google based a messaging system on RCS it would be inherently open and would interoperate with any client or server implementing a compatible RCS profile. Standards and interoperation could be a shortcut to wider use.

Are Allo and Duo the first shots fired in that battle? The short answer is "no." Allo and Duo appear to have nothing to do with Jibe RCS, or RCS in general. Instead they are aimed at providing a better messaging experience, messaging privacy, and decent performance in challenging network conditions. Duo uses QUIC, a protocol that combines all the things, like throttling and encryption, that one would otherwise have to build on top of UDP to do efficient and secure multimedia communications on wireless IP networks. Duo's claimed advantage is better performance in conditions where other video messaging apps become unusable. But the signaling that sets up Duo video calls is WebRTC, not RCS; the protocol that moves the video call payload is QUIC.

Here is some information on QUIC: https://www.chromium.org/quic
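
To make the division of labor concrete, here is a toy Java sketch of the bare datagram layer that QUIC builds on; the peer host name and port are hypothetical. Everything flagged in the comments as missing is what a QUIC-style stack must supply:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class BareUdp {
        public static void main(String[] args) throws Exception {
            byte[] frame = "one video frame".getBytes("UTF-8");
            DatagramSocket socket = new DatagramSocket();
            // UDP gives you exactly this: a best-effort send to a peer.
            DatagramPacket packet = new DatagramPacket(
                    frame, frame.length,
                    InetAddress.getByName("media.example.com"), 5004); // hypothetical peer
            socket.send(packet);
            // There is no delivery guarantee, no ordering, no retransmission,
            // no congestion control, and no encryption. QUIC layers all of
            // those onto datagrams like this one.
            socket.close();
        }
    }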

End users may be getting whiplash from Google's changes of direction, and from the tactical approach of fielding a separate product for each kind of partnership or competitor.

Moreover, RCS messaging is viewed askance because carriers are required to provide lawful intercept (LI) capability - a built-in law enforcement back door - for their messaging as well as for calls. Therefore, if Google provides RCS signaling and messaging for a carrier, or if Project Fi is a carrier, Google would also have to provide LI for RCS-based messaging. Users of messaging apps that go "over the top" (OTT) of carrier networks are increasingly aware of security and are choosing more-secure apps like WhatsApp and Telegram.

To provide a high-quality response to increased security awareness, Google is using Open Whisper Systems' (OWS) encryption for a secure mode in Allo, and the QUIC protocol stack has end-to-end encryption built in for real-time communication. OWS makes open source encryption products that have a first-tier reputation among security experts. Allo and Duo should have some of the best security available for communication.
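
As a rough illustration of what end-to-end encryption means in practice, here is a minimal Java sketch of authenticated encryption with a shared session key, using only the standard javax.crypto API. This is not OWS's Signal protocol; in a real messenger the key would be derived from a key agreement between the two endpoints, so only they could ever read the message:

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class EndToEndSketch {
        public static void main(String[] args) throws Exception {
            // Stand-in for a session key negotiated by the two endpoints.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey key = kg.generateKey();

            // A fresh nonce per message; GCM gives both secrecy and integrity.
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal("hello".getBytes("UTF-8"));

            // Only a holder of the same key can decrypt and authenticate it.
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8"));
        }
    }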

Despite all the confusion Google has managed to create, the technologies behind these products, especially QUIC, are still of interest, and it remains possible that OWS end-to-end encryption could end up in Google's as yet unannounced RCS-based products.

Friday, January 01, 2016

Telirati Analysis #17: Google jukes around Oracle's copyright play, and what Oracle is missing out on

Android is client Java

Android applications are, by several orders of magnitude, the dominant form of client Java software. The only widely used interactive Java applications, other than Android apps, are integrated development environments (IDEs) which are big, complex software creation tools.

Oracle is breaking the business of software creation

Oracle, which now owns the leading proprietary implementation of Java, should be grateful that client Java has been revived. Instead, Oracle has decided this is an opportunity to litigate poorly established parts of intellectual property law, vexing Google, Android developers, and tool-makers in the Android ecosystem. Oracle has made various claims, and one of the most destructive to software development in general is that software interface specifications, usually known as "APIs" or "application programming interfaces," can be protected by copyright.

This claim is deleterious to the whole software business, and nonsensical. It is like claiming that the information that your washing machine uses 3/8 inch bolts to mount the motor is covered by copyright. It is longstanding doctrine that facts like the size of a bolt can't be protected from dissemination by copyright. Similarly, the symbolic names and data types used in method calls have, for decades, been assumed to be a similar set of facts. Oracle may, however, succeed in lawyering this into a point of contention.

If Oracle prevails, many published APIs will come under copyright claims and license fee demands. This is an industry-wide disaster in the offing.

Fortunately, Sun liked the GPL

Sun Microsystems was of two minds about Java, sometimes claiming it was proprietary and sometimes working to assure the software industry that it was an open standard. To promote the latter, Sun created OpenJDK, an implementation of Java licensed under the GNU General Public License, an open source license that strongly discourages claims of proprietariness in derivative works. This is in contrast to the Apache license Google adopted for the non-proprietary parts of Android that Google created, which allows OEMs and integrators to hold their enhancements to Android as proprietary code.

Where's the Java?

We've used the name "Java" loosely. Most people would say "Android runs Java," but, strictly speaking, that's not true. There is no Java in an Android device. All the Java bytecode in an Android application is converted to Dalvik bytecode before being packaged in an "apk" file. The Android runtime environments (Dalvik and ART) don't know anything about Java bytecode. It may look like Java code is being executed with the expected Java-like semantics, but it isn't.
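
One quick way to see this for yourself, assuming you have an Android device or emulator handy: ask the runtime what it is. On Android it reports "Dalvik" (ART retains the name for compatibility), while a desktop JVM reports something like "OpenJDK 64-Bit Server VM".

    public class WhichVm {
        public static void main(String[] args) {
            // Prints "Dalvik" on an Android device, a JVM name on the desktop.
            System.out.println(System.getProperty("java.vm.name"));
        }
    }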

So, where's the Java? Up to now, you had to get a particular version of the Oracle JDK (Java Development Kit), freely available from Oracle's web site, in order to create Android software. That's because Android used interface specifications from the Oracle JDK.

Google embraces OpenJDK

By embracing OpenJDK, Google has sidestepped potential licensing demands, at the cost of having to step up and contribute to an open source project that hasn't kept up with the proprietary JDK. This is good for Google, even though it should not have been necessary, and it is good for all Java developers, because it prevents Oracle from imposing licensing fees on anyone using the Oracle JDK. It is also a step toward an open and unencumbered Java standard.

And that's a pretty good beginning to a New Year of Java development.

What should Oracle be doing?

Oracle's stand has been spiteful, contrary, and vexing to the whole software industry, which relies on the ability to use information about APIs. Oracle is willing to overturn many apple carts just to mess with Google. This is a management style that is on its way out in the software industry, and it paints Oracle as a has-been, fighting a rearguard action against the NoSQL databases eroding the grip it had on the database business. The web doesn't need Oracle, and Oracle appears to be thrashing about, lawyering for money instead of making new things to sell.

Imagine what Oracle could have done by cooperating with Google: Development tools, Java technologies, and vast new product areas to extract new revenue streams. Many of these opportunities have passed by due to Oracle's litigiousness. Being a ruthless bastard is one of those strategies that can be made to look good for a while, and then it stops being effective and turns into a burden.

Monday, December 29, 2014

Telirati Analysis #16 Practical project management

Software projects and traditional project management tools have always been a dangerous combination. But, you can understand the attraction: Gantt and CPM chart editors can be a delightfully visual way to plan a project. Even a novice gets a thrill of insight and the feeling of control.

Project management tools were originally made for creating big, expensive physical objects, like bridges, subway lines, and skyscrapers. These projects merited hiring professional project managers, and the nature of the projects left little room for ambiguity: Either 27 floors of steel are up, or not. Either the interior framing up to the 10th floor is ready for the electricians and plumbers, or not. In this world every measurement of task completion is represented by a physical object that can be directly inspected. Almost all the tasks in these physical-world projects have been done thousands, even millions of times before. Anomalies are relatively easy to spot, and few such anomalies are dangerous to an entire project.

The mismeasure of software

Contrast this with software development: if you use the same tools as people who build skyscrapers, you are locked in to a largely "waterfall" model of development. That's out of fashion for good reason: the plan is rigid and subject to misreported completion. Projects die with 80% of every task complete, but with few tasks actually finished, done, put to bed. Many of these pathologies can be traced to two differences in software creation:

Software creation is all about doing new things. Unlike buildings or roads, once you have built one kind of software, you can make a billion copies at negligible cost. So all new software is really new: each significant task has never been done before and need never be repeated. If bridges were made of tens of thousands, or tens of millions, of unique, hand-crafted parts, with the quality of each part highly dependent on the skill of the maker, you would find traditional project management tools completely inadequate to the task.

It is easy to mismeasure software task completion. Module tests tell you only so much. You can be convinced a task is complete when it is only half done.

Then combine the rigidity of traditional project management methods with misuse by inexperienced but enthusiastic software team managers and you have a truly deadly mix: "All these tasks can run in parallel, right?" "Resource leveling? What's that?"

Project management made for software

Software creation has bred its own style, not to say dogma, of project management: Agile. As with other organized religions, it is a canonicalization of a collection of possibly useful parables, combined with a reframing of existing myths and some novel vocabulary for the priesthood to sling. Although the True Believers will tell you it's not a fault of Agile but rather of the application, Agile has become notorious for allowing projects to run, incrementally, out of control, while providing the illusion of tight control.

We won't rehash all of it here, but suffice it to say that Agile has earned a backlash. Pathologies typical of Agile projects may well reflect problems in the organizations using Agile rather than Agile itself, but that's hardly an excuse, any more than the pathologies of traditional project management are an excuse for its unsuitability to software projects.

That said, Agile has key benefits:
  • Accessibility and inclusivity in planning
  • Agility: adapting a plan to changed requirements is easy
  • Change tracking, accountability
  • Done is done, and what's not done is tracked

How can we forge a practical, undogmatic tool set for software project management that works in the reality of how software is created? And, how can you, as someone not steeped in traditional project management and in Agile, feel confident enough to commit the heresy of a mixed approach?

For one thing, you can benefit from our experience. We use available tools in a practical combination, with some key goals in mind:
  • Determine if a project is realistic, and go back to the basics of the idea if it isn't
  • Keep tactical implementation decisions flexible
  • Retain data for measuring plan-to-actual performance and other accountability metrics
  • Don't bog a project down in replanning, but track changes
The most important overall goal is to retain the control over total project scope and cost that an up-front analysis provides while reaping as many benefits of Agile methods and tools as possible.

Toward a practical hybrid approach to project management

These are some of the ways we work toward those goals:
  • Use a traditional resource-loaded CPM/Gantt chart project scheduling software package to find places where the project is resource-bound.
  • Use resource leveling, either performing this aspect of planning manually, by using resource loading reports to identify over-committed resources, or by using automatic resource leveling if you have traditional project management software that has this capability.
  • Find milestones that can be used to funnel multiple task completions to a choke point that must be completed before subsequent tasks are started, and get a best-estimate for total project completion. This is for planning only, not tracking.
  • Defining a minimum viable product is very useful for planning. Take your task list and draw a horizontal line. Some tasks must be part of a minimum viable product (MVP), and some are not. Those go below the line. Make sure your project can reach a complete minimum viable product as quickly as possible. 
  • When composing task "stories" do not allow those stories to stray outside the MVP definition, unless your project has ample slack to accommodate the extra work. You won't know that until late in the project's timeline.
  • Use a "swim lane" style task manager to run tactical resource allocation and keep decisions about resources fluid enough to not be bound to a rigid waterfall process. JIRA, in particular, has more than enough parameters and options to be configured for a hybrid project management approach.
  • Use your up-front project analysis as a way of preventing task-by-task mission creep. If a scrum adds up to a lot more hours than your up-front analysis predicted, review each task and watch that tasks do not stray beyond the functional requirements, and if they do, capture the intent by comparing stories to the preliminary task descriptions.
  • Keep a version-controlled spreadsheet (Google Sheets is convenient for this purpose) that tracks added and deleted tasks and their resource and time estimates.
  • Use milestones as gates that must be crossed before earlier parts of the project can be confirmed as completed. If the milestone is not complete, the project is in a state of day-for-day slippage.
  • When creating scrum boards refer to the CPM chart and avoid spanning major milestones with sprints.
  • Defer resource allocation to each sprint planning session to avoid locking the project into fragile up-front predictions about who should implement what.
This keeps a project nimble, not to say "agile," and it provides adequate discipline before you find yourself at the end of the project schedule, surprised that it didn't all come together in the last week. It also gives me, as a consulting participant in projects, the documentation needed to take to a client when change requests add up and a re-estimation is needed, and it does so in a responsive framework that avoids re-bidding the whole project.

Saturday, December 20, 2014

Telirati Analysis #15 Write Less Android Software

You have a substantial budget. You have an army, perhaps a foreign mercenary army, of developers. You've got experience delivering big Web projects this way. But your Android project might as well be Afghanistan. Unexpected limitations, difficult bugs, poor performance, and bloat plague you for weeks and months.

Android devices have become big, but Android is still for small, limited devices

Some Android devices have become capacious and powerful. The processor benchmarks for flagship handsets rival processors commonly found in PCs. The RAM capacity in most phones is still less than half what you get in a typical PC, but that's still quite a lot.

You have cross-trained some of your best Java experts on Android. They know how to engineer big Java projects. But you find unexpected problems. Why don't the same engineering approaches that work for your Web projects work for Android?

The reason is that Android is, still, a clever little OS for clever little devices. Android may run on flagship phones, but it will also, still, run on low-spec devices with remarkably good performance. It was designed for devices with modest resources. More specifically, and although the minimum specs for Android have gone up-market since then, it was originally designed to enable multiple Java runtime instances to run on the same kind of hardware BlackBerry was running on 10 years ago. Many of those design elements still permeate Android's architecture and color the engineering approach needed for Android.

Many VM instances in a small system

Android enables a multi-process, multiple runtime-instance userland and middleware layer to run on top of a Linux kernel. If you have ever started multiple Java VM instances on a PC and seen the resulting performance hit, this sounds like the opposite of a small Java OS for small systems, but it turns out to be the central paradox of Android. Android achieves these characteristics by pre-loading classes in a "zygote" process, efficiently sharing memory with copy-on-write, limiting per-process heap size, dividing applications into small components with potentially short lives, and requiring all components to handle life-cycle events. There are cases where whole processes are "reaped" to free and compact memory. Even though Android's runtime environment implements Java language semantics, it is very unlike the environment your Web server code runs in.
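
Here is a minimal sketch of what handling those life-cycle events looks like; FrugalActivity and its cache field are hypothetical, but the callbacks are standard Android ones. The point is that a well-behaved component gives memory back on demand rather than assuming its process will live forever:

    import android.app.Activity;
    import android.content.ComponentCallbacks2;
    import android.os.Bundle;

    public class FrugalActivity extends Activity {
        private byte[] thumbnailCache; // illustrative in-memory cache

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Restore transient state here; the process hosting this component
            // may have been reaped and recreated since the user last saw it.
        }

        @Override
        public void onTrimMemory(int level) {
            super.onTrimMemory(level);
            if (level >= ComponentCallbacks2.TRIM_MEMORY_BACKGROUND) {
                // Release caches instead of forcing the system to kill processes.
                thumbnailCache = null;
            }
        }
    }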

Android is ill-suited to run the product of large software projects. Big libraries. Phalanxes of coders measuring their productivity in klocs. Layers of code. Abstractions. Utilities to help keep junior coders out of trouble. All these activities produce bloat, encourage monolithic applications, and hide problems. Especially so on Android.

Bigger heaps are not the answer

Android has an aggressive memory recovery strategy built deep inside the Android architecture. Is Android's architecture obsolete in an age of multi-gigabyte, multi-gigahertz flagship phones? The answer is No for at least two reasons:
  1. Android has a very wide range of hardware targets. For every flagship phone and tablet, dozens of low-end phones will be sold. Apps will go into embedded devices in cars, home appliances, industrial computing devices, and non-phone consumer electronics devices.
  2. Java-style garbage collection works only within one process. Android's memory recovery strategy implies that the larger the maximum heap size, relative to total RAM, the more often processes get all their components destroyed (and subsequently reconstituted in a new process) in order to recover memory system-wide. There is a sweet spot for maximum heap size, and it's a relatively small fraction of total RAM.
Moreover, Android apps have to work across the whole range of heap sizes in use for the Android devices targeted. Limiting your app to high-end devices just to accommodate your engineering approach isn't a solution.
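
If you want to know what heap budget a given device actually grants your app, the platform will tell you. A small sketch (the class name is hypothetical; the ActivityManager calls are standard Android API):

    import android.app.ActivityManager;
    import android.content.Context;
    import android.util.Log;

    public class HeapBudget {
        // Log the per-app heap limits so caches can be sized to the device.
        public static void logHeapClass(Context context) {
            ActivityManager am =
                    (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
            Log.i("HeapBudget", "standard heap limit (MB): " + am.getMemoryClass());
            Log.i("HeapBudget", "large-heap limit (MB): " + am.getLargeMemoryClass());
        }
    }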

Were Android's architects just messing with you?

Multiple (dozens!) of processes, each an instance of a Java runtime, each with a limited heap size... is this an environment created specifically to be mind-bending to the typical server-Java engineer? You may very well think so. However, where Android imposes constraints in some dimensions, it offers opportunities, and novel features in other ways: Android has an unusually rich architecture for inter-process communication and remote procedure calls.

Write less code, in smaller modules, and embrace the Android architecture

The way to avoid unpleasant discoveries, such as Android's per-APK method limit, heap size limits, etc., is to divide your software into a suite of separate but communicating modules, and divide those modules among multiple APK files. Moreover, this is an opportunity to structure your projects along the same lines: smaller projects, smaller teams, and abstractions that follow the contours of the Android system. Avoid libraries that duplicate functionality provided by the Android OS. If the choice is portability-with-bloat vs. more porting effort, the porting effort will be the cheapest investment you make to fight bloat.

If you have data that multiple modules might need, use ContentProvider components to share it. If you are rendering large images, say for an e-reader, do that in a process and heap dedicated to that part of your application. Break down your monoliths, and don't try to subvert Android's limitations.
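
For the shared-data case, here is a sketch of the consuming side, assuming another one of your APKs publishes a ContentProvider; the authority, path, and column name are hypothetical:

    import android.content.ContentResolver;
    import android.content.Context;
    import android.database.Cursor;
    import android.net.Uri;

    public class SharedDataClient {
        // Hypothetical provider published by a companion APK.
        private static final Uri BOOKS =
                Uri.parse("content://com.example.reader.provider/books");

        public static void listTitles(Context context) {
            ContentResolver resolver = context.getContentResolver();
            Cursor cursor = resolver.query(BOOKS, new String[] {"title"},
                    null, null, null);
            if (cursor == null) return;
            try {
                while (cursor.moveToNext()) {
                    String title = cursor.getString(0);
                    // Render the title in this process's own, smaller heap.
                }
            } finally {
                cursor.close();
            }
        }
    }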

Facebook's approach to unbundling functionality makes Facebook's products more visible on mobile devices. Keeping your apps simple, relatively small, and having multiple cooperating apps isn't just smart engineering, it's better for the user and better for you commercially.

Thursday, July 10, 2014

Telirati Analysis #14 Google's Social Menagerie and its Android and Web Habitats

N.b.: The tables in this article are available as a single table here

Welcome to the menagerie

Who knew how social Google really is? Google has at least nine properties that can be considered "social." Back in our analysis #11 we took a quick inventory and found that social characteristics permeate Google's collection of tools and applications, and that Google+ missed some key opportunities in high-value areas.

Here we expand our collection of specimens and apply taxonomy to Google's ecology of things with social characteristics and how well adapted they are to Web and Android habitats. We divide the analysis into two parts: content characteristics and social features.

Content characteristics

In addition to search, email, and office productivity, Google runs at least nine "applications" that deal in user-provided content. We can't call them "Web sites" since most of them are presented through both Android applications and a Web user interface. But the extent and quality of this presentation is unequal and uneven. The content varies by media type and long form/short form characteristics. The intended persistence of the content also varies, though persistence often really means "ease of discovery" which can diminish quickly if a chronological update stream is the principal means of discovery.

Content Characteristics
Property | Web/Android presence | Content form | Persistence
Google+ | Web-first; Android app has a subset of functionality | Short-ish form: longer than Twitter, shorter than a blog post | Short - blink and it's gone
Orkut | Web-first; has a really bad Android app | Short to medium form | Short - blink and it's gone
YouTube | Web-first; Android app has a subset of functionality | Video | Medium to long
Photos | Near-parity between Web and Android | Images, stories | Long
Drive | Web-first, but Android at near-parity | Long form | Long
Blogger | Web-first; Android app has a poor subset of functionality | Medium to long form, essay-like articles | Long
Sites | Web-only (duh!) | Multi-page | Long
Groups | Web-only | Short to medium form | Medium to long (pinned)

Social Features

You can think about managing social applications as an exercise in applying common tools along common axes of characteristics and functionality. Does an application lack something basic like an update stream? Even code-oriented applications like JIRA and Github have update streams. If Google Code deserves to survive, it will have to catch up to competitors by implementing those social features.

Across all properties with user generated content and social characteristics, Google can gain an advantage over competitors by applying common social tools, such as an update stream, threaded comments, and user management features with a more-uniform level of sophistication. This means breaking up monolithic implementations and applying the modules to applications. An obvious example would be to make Blogger the long-form presentation variant for content that's too long and too persistent for Google+. But this principle can be applied to applications as varied as "social coding" and Web site hosting.

Social Features
Property | Update stream | Discussion and comments | User and group management
Google+ | Big, fast update stream, manageable with circles | Replies, but not branching threads | Flexible circle management integrated with Google address book
Orkut | Fast update stream | Linear comments | Friend list
YouTube | Weak | Linear comments, now integrated with Google+ commenting | Can follow users/channels; users frequently maintained separate identities for YouTube
Photos | Full (too-tight) integration with Plus | Plus comments | Plus circles
Drive | None | Collaborative editing, comments embedded in documents, document sharing in Hangouts | Document sharing
Blogger | Weak; can "follow" blogs, but it's a different "follow" than in Google+ | Weak, but optionally integrated with Google+ | None
Code | None | Issue tracking and discussion | Everything public
Sites | None | Wiki-like comments and attachments are optional | Sites can be private to a group
Groups | Chronological view of posts, post-by-email | Rich threaded branching discussions | Group management not integrated with other Google properties


Each of Google's properties needs to be adjusted in different ways to fit into a grid of social characteristics and tools that extract maximum utility for those characteristics. Google is well-placed to do this, and a rational, consistent approach is what Google often, but not always, converges on in the end.

Property | Needed adjustment
Google+ | Disaggregation into separate photo and social update apps
Orkut | Will be EOL'ed in September
YouTube | Automatic integration of videos into the Plus update stream; consolidation of follower features with Plus
Photos | Needs a little more separation from Plus: a separate update stream of shared images
Drive | Document updates should be posted to an update stream visible to collaborators
Blogger | Integration of long-form content discovery with Plus, perhaps in the form of a separate content stream
Code | Would be EOL'ed but for the huge potential to show what "social coding" could be like in the googleverse
Sites | Needs a feature upgrade to make it competitive with first-tier Web site building tools like Squarespace
Groups | Adding "pinning" to Plus could integrate all of Groups' functionality into Plus


Google needs to apply disaggregation, combination, and cross-domain implementation of a common set of social functions. Photos and YouTube are media-specific views into a common social update stream, but with variants in presentation that account for discoverability requirements. There are also opportunities to combine the way some data is stored. Groups, for example, may boil down to a variation of how Google+ content is acquired and presented.

It wouldn't be worth doing without an upside. The measure of the upside is to be found in the fact that many domains that seem unrelated can all be enhanced using social media tools. A deeper analysis of this form should be able to tell Google's product managers what to keep and what's in a hopeless competitive situation, and, if something is worth keeping, what the resource requirements are to make it first-rate.

Monday, June 23, 2014

Telirati Analysis #13 Missed It By Thaaaat Much: Why Chrome OS Athena isn't Chrome OS for Tablets

It's hard to drag a legacy UI into a touch world

As Microsoft well knows, it's hard to drag a legacy UI system onto touch devices successfully. Microsoft has tried numerous times, notably for the UMPC format and Windows Mobile, and more recently in Windows 8.1, either to evolve Windows into a touch OS or to bifurcate it and leave the legacy UI behind. Neither approach worked.

Touch and the browser

Microsoft has, however, dragged OEMs into building touch laptops. Google acknowledged this trend by creating the Chromebook Pixel with a touchscreen. Google felt it needed to "own" the issue of touch and Chrome in case touchscreen PCs were successful: Chrome OS and the Chrome browser, as well as Google's Web apps, would have to adapt to a world with lots of touch systems running Windows. Chromebook OEMs followed Google with their own touchscreen products in laptop form factors. But the touch-PC world never materialized, and touch in Chrome OS has become a stagnant and underdeveloped capability.

Dubious ergonomics

It's an open question if laptops are any good for touch. The screen is an awkward distance from the user. The laptop hinge can't resist the pressure of touch near the top of the screen. You can't pick up a touch laptop and wield it like a clipboard. Software aside, there are good reasons for the idea of a touch laptop to fail.

An accidental feature

With a track record of failure and obvious ergonomic issues, touch in PCs is a kind of accidental feature. It propagates among products via weak product management that's fearful of competitors' bad decisions. Nobody got fired for aping a larger competitor. So touchscreen laptops, and even large touchscreens in desktop all-in-one PCs ripple through the industry like the echo of a Rick-roll.

Athena is coming

Now that Windows 8.1 has landed with a thud, and OEMs are kvetching about being misled by predictions about the success of touch, there are some signs that Google has decided to take another shot at touch in Chrome OS. Or, perhaps, the Chrome OS developers are just tidying up loose ends in the Chrome OS touch interface, of which there are a considerable number at the time of this writing, based on a search of open issues mentioning "touch."

As of this writing, Athena is the next version of Chrome OS. Chrome OS is developed in the open, with the issue tracker for the project open for anyone, friend or foe, to examine. This enables speculation by pundits on topics like touch. But it also enables a deeper examination of the Athena feature set.

More than touch

In addition to touch, Athena is a further evolution of Chrome OS's UI stack and desktop. Under every "Web OS" is a limited window system that enables menus and title bars and an inevitably ever-growing set of UI widgets. Athena adds support for a card-like UI in the widget set that, in turn, supports added UI features in Chrome OS.

Indeed, Web UI in general needs to evolve toward app-orientation at least as much as touch interactions need evolving. One reason mobile apps have taken off and outrun Web apps on mobile devices is that native UI is not just more touch-friendly, having been designed to be so; it is generally more amenable to direct manipulation and other GUI conventions and design idioms.

Chrome will continue to adhere to browser standards, but, within those standards, will adapt itself to making it possible for apps to better-integrate individual app experience with Web browser UI.

Evolution, not revolution

As much as improvements in touch and general UI interaction are needed for Web apps to catch up to native mobile apps, those improvements will be the result of evolutionary change. It is unlikely, for example, that Athena will result in Chrome OS tablets. More likely, Athena will make Chrome OS better suited to touch laptop form factors, which are themselves an evolutionary step on the way to some other destination.

The lesson for developers in this is that Web and native platforms are of roughly equal importance, though hardly any developers act as if they truly believe that. Web interfaces continue to outrun mobile interfaces for features, because that's what backend designers design for, and mobile apps outpace Web apps for direct manipulation, fine-grained control, and concise, consistent interaction.

Neither Web nor native is going to dominate mobile applications within the next 5 years, never mind consolidating a new paradigm for laptop and desktop UI. Implement for both native and Web UI as if each were your primary platform.

Saturday, June 21, 2014

Telirati Analysis #12 Intel and Rockchip: Why Intel Isn't Inside Your Phone

SoCs are mostly the same

One of the mysteries surrounding the question of why Intel hasn't gotten inside your phone is that systems-on-a-chip (SoCs) are all quite similar. Some CPU cores, a GPU, and I/O peripherals share a bus, all on one chip. If high-level architecture is so similar, and apparent barriers to entry are low enough that numerous new entrants have flourished, what's up with Intel?

A combination of factors

If new entrants can get in the game, and SoCs are difficult to differentiate, it is unlikely that you can find one factor, or one problem to solve, that will unlock the mobile market for Intel. Here are some of the factors cited over time as keeping Intel out of your phone:

Power management: ARM CPUs were designed for low power applications and ARM licensees have all the expertise in integrating systems that result in low power SoCs. Intel has largely overcome this barrier, though it took more product generations than it should have, and Intel may still trail NVidia in bringing big GPUs to low-power SoCs.

Windows and netbooks: Netbooks are relatively higher-margin products, Intel knows all the netbook OEMs, and netbooks have bigger batteries and can tolerate higher power consumption. The Intel instruction set architecture and Microsoft's Windows technology strategy kept ARM-based competitors out of the mainstream Windows market.

GPUs: Intel hasn't brought its own GPUs into low-power SoCs. Currently, Intel uses Imagination Technologies (IMGTec) PowerVR GPUs in Atom chips. There may be an IP licensing price mismatch between IMGTec and the kinds of OEMs who might first bring an Intel SoC to market in their devices. There is no open source driver for PowerVR GPUs, which limits the experience Linux developers have with that GPU. That said, PowerVR is in Apple's mobile products and many other ARM-based SoCs; there is nothing apparently deficient about the GPU architecture. IMGTec recently acquired MIPS and looks likely to get into the CPU licensing game.

Performance: Benchmarks for Intel Atom CPUs have always been competitive with ARM CPUs, even though the fastest ARM CPUs, like Apple's A7, outperform the top Atom CPUs. So raw performance numbers can't be the whole answer. The rise of modestly priced, medium-performance products like Motorola's Moto G further reduces the importance of ultimate performance. Toolchain issues are also unlikely to be holding Intel back, so while one can conclude that performance is part of why Intel is not in your phone, it is not likely to be a major part. Performance may become a larger factor in combination with GPU performance if NVidia is successful with the K1 chip. General-purpose GPU computing is another factor in the performance equation; RenderScript, Android's little-known and little-used framework for running data-parallel computation across CPU and GPU cores, is one route to it.

Price: The question of price by itself embodies the complexity of why Intel has no significant mobile wins. ARM licenses its designs, as do mobile GPU designers, and that has proven to be a powerful business model. Samsung and Apple are vertically integrated, designing their own SoCs. ARM SoCs are often fabbed by contract manufacturers, except for Samsung, which operates its own fabs and also makes Apple's chips. And despite the vertical integration at the top of the market, there are low-cost fabless ARM SoC vendors. While prices are obscured by the difficulty of measuring transfer pricing inside large users of CPUs like Samsung and Apple, you can conclude that diversity of supply means intense competition, and that Intel has a difficult time convincing OEMs that it will be the low-cost vendor over the long haul.

Intel and Rockchip

It may seem hard to figure out why Intel entered into a relationship with Rockchip to produce SoCs. But it's a hard multidimensional problem they are trying to solve, and getting an experienced maker of low-cost ARM-based SoCs with significant design wins involved is an approach that could solve more than one problem at a time. It's more alchemy than science, but Intel has much to gain and nothing that gets worse if this approach doesn't succeed.

One factor that isn't part of the answer is investment. Intel doesn't need Rockchip's money, and Rockchip isn't taking any Intel money; Rockchip expects to make back its cost of development by selling the chip through its own channels.

Both companies will sell the product, though it will be branded as an Intel product. This is significant because Rockchip has mobile handset customers and Intel doesn't. 

Another factor that can be eliminated from the answer is CPU core licensing. This isn't an intellectual property license agreement in the way that ARM licenses its designs, and it isn't being replicated with other SoC makers. Nor is it a foundry agreement; Rockchip is fabless.

IMGTec's venture into CPUs may also have motivated Intel to find a partner that can integrate GPUs other than PowerVR. Rockchip currently uses Vivante GPUs. But this does not necessarily augur a long-term relationship, especially if the next step on the road map is the integration of Intel's own GPUs into mobile chips.

Another indicator that this may be just a bridging partnership is that the planned SoC will have a 3G radio. That may not be competitive in markets outside China and the developing world, which are Rockchip's current target markets.

This is not to say that Intel is exploiting Rockchip's customer base utterly without a road map for future collaboration, but it does appear that Intel sees this as an experimental way to acquire its first customers in the mobile handset domain.

Strange bedfellows, and a limited relationship that leaves the future open for reconsideration. But this may be the way we get to see Intel-powered handsets in China. It may also be the way Intel learns to be competitive in a domain that has eluded them for complex reasons.

Telirati Analysis #11 Diagnosing and Fixing Google's Social Problems

Photo by Matthew Hester (CC BY-ND 2.0)

Google has become something of a slumlord. While Google+ has been accused of being a "ghost town," at least it looks pretty, even now after what feels like a long period of stagnation and un-addressed bugs. But Google+ isn't the most neglected neighborhood in Googleland.

Where is the Blogger blog?

Various parts of Google, notably Search, use Blogger to convey news about new releases and their development road map. Blogger is key infrastructure for Google itself. But where is the Blogger blog? Like some other semi-abandoned properties, Blogger no longer has frequent updates about features.

If you want a history of Blogger and its features, you'll have to rely on Wikipedia. Evidently there are ardent Blogger users who keep track of these things.

Why does this matter? There are a lot of abandoned places on the Internet. Blogger, however, is emblematic of the problems that precipitated the departure of Vic Gundotra from Google+. Blogger and Google+ have a very valuable natural symbiosis. Potentially, Blogger and Google+ could unify long-form and short-form writing on the Internet. But, in reality, the execution fell short. The Google+ comment thread system was grafted onto Blogger, which needed a replacement for its own weak and spam-riddled comment system. But this falls far short of actually solving the problem of creating a unified space for discovering posts and discussions of varying complexity and lifespan.

As we will see, this isn't the only missed opportunity.

Let's make a Group and discuss it

Yes, there is another social, post-centric, discussion thread tool at Google: Google Groups. It's easy to see why Google Groups is still used: You can get a concise listing of threads, organized by topic, with some particularly timeless threads "pinned" to the top of the list. Just the thing for discussing, among other things, features users want to see in Google products.

In 2013, the Groups announcement blog announced there wouldn't be any more announcement blog posts because they had become too rare. So we are left with a situation where I can't get a threaded view of a discussion on Google+ and I can't get "bell" notifications from Google Groups. Slummy.

2009 Called...

Sites is another abandoned-looking amusement park in Googleland. Stagnant functionality. Outdated templates. No blog. Despite Google having properties like Wallet, there are no easy e-commerce options for Sites. It's just a middle-of-the-pack Web site builder with a wiki-ish flavor to it. Web site building is a field with fewer than a handful of first-tier players, so being "middle of the pack" means you're not in good company.


We got this far without mentioning Orkut, a social network operated by Google, with minimal integration with other Google properties (e.g. no "bell" in the strip at the top of the screen), that has a Turkish name and is big in Brazil and big-ish in India. With presumably limited resources, the look and feel of Orkut remains charmingly MySpace-like.

If Facebook is expanding internationally, how does an opportunity to integrate an international audience into Google+ keep floating out of reach?

YouTube is like Instagram for video, and should have been treated as gently

YouTube is a social network. But here Google should have taken a tip from Facebook. YouTubers are mostly not Plussers, in the way that Instagram's hipsters are not Facebook schoolgirls. Facebook had the good sense to leave that situation alone. Instead, Google+ enforced strong identity on a trollish population. It's easy to see why that would be a thankless task, and a dumb move in an attention economy where "thanks" is the currency.

It's the product management

Product management in technology companies is a distinct function that sometimes gets rolled up into program management, project management, or even engineering management. But many of the problems listed above are the result of product management failures. The product manager, sometimes called the "product owner," is meant to make the product competitive against specialist competitors, and to balance the interests of "his" product against the sometimes abstract advantages of integrating with other products under the same roof.

In the egregious cases listed in this post, even the basic functions of competitive analysis seem to have been lost. That is either a product management failure or a senior management failure in deciding to keep products while starving them below a competitive level of resources. Lackadaisical or willful neglect, or both, are what turn some products into Internet "slums."

Let's take a property inventory:

Let's do a little product management exercise and take an inventory of what Google owns, and the relevant characteristics. We'll leave Orkut out because it's largely redundant with Google+ in every characteristic.
Property | Content | Persistence | Update stream | Discussion, comments | User, group management
Google+ | Short-ish form | Short - blink and it's gone | Big, fast update stream, manageable with circles | Replies, but not branching threads | Flexible circle management integrated with Google address book
Blogger | Long form | Long | Weak; can "follow" blogs, but it's a different "follow" than in Google+ | Weak, but optionally integrated with Google+ | None
Sites | Multi-page | Long | None | Wiki-like comments and attachments are optional | Sites can be private to a group
Groups | Short form | Medium to long (pinned) | Chronological view of posts, post-by-email | Rich threaded branching discussions | Group management not integrated with other Google properties
YouTube | Video | Medium to long | Weak | Linear comments, now integrated with Google+ commenting | Can follow users/channels; users frequently maintained separate identities for YouTube

What does this table suggest? The main problem that jumps out at you from this table is that comment integration is weak integration. It doesn't deliver a lot of value and it doesn't solve the key problems for user-generated content. Meaningful, useful integration that improves the user experience must be deeper integration. All these properties have social characteristics, but comment integration is peripheral to social-centric services.

An MVC model for social properties

With Orkut, Google has six social properties, at least, before we get into things like social documents and social coding. With Google+, social became a pariah within Google for taking on the wrong integrations with the wrong products in the wrong order. Comment integration was weak, and with it came identity integration that was, in the case of YouTube, a source of deep dissatisfaction. It's bad form, and just plain a bad idea, to go and mess with another product's community.

Integration should have started deeper, with the content database and update stream. Then with a gentle, optional, merging of identity, and adding the group-management features of Google+ Circles. Comment integration is then a minor issue. Each product can retain a distinctive set of views outside the chronological fast-flowing river of the update stream, thereby both merging and keeping distinctive the persistence characteristics of each property. You can think of that as multiple views into unified content, or a "model/view/controller" (MVC) approach to viewing the same model through multiple views.
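
As a toy sketch of that MVC idea, in Java, with entirely hypothetical class names: one shared update-stream model, and per-property views that present the same content with different persistence and discovery characteristics.

    import java.util.ArrayList;
    import java.util.List;

    interface StreamView {
        void render(List<String> posts);
    }

    class UpdateStreamModel {
        private final List<String> posts = new ArrayList<>();
        private final List<StreamView> views = new ArrayList<>();

        void attach(StreamView view) { views.add(view); }

        void post(String content) {
            posts.add(content);
            for (StreamView view : views) {
                view.render(posts); // every property presents the shared model
            }
        }
    }

    class PlusView implements StreamView { // fast chronological river
        public void render(List<String> posts) {
            System.out.println("Plus: " + posts.get(posts.size() - 1));
        }
    }

    class GroupsView implements StreamView { // persistent, topic-organized view
        public void render(List<String> posts) {
            System.out.println("Groups: " + posts.size() + " posts in this topic");
        }
    }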

The opportunity

Google's biggest opportunity is to create the best user experience for user-generated content across Web sites/wikis, discussion groups, blogs, videos, images, and the social update stream, and, along with the content continuum, to integrate (carefully!) the management of participants and their comments and collaborative participation. Almost everything has a social aspect, but Google somehow missed the deeper insight once Google+ latched on to comment integration as the feature it would take across properties.

One more thing

One more thing about Google+ and integration with other Google properties: Android. GMail is great on Android. The GMail Android interface is richer than the Web interface. Maps is magical on Android. Keep is simple but awesome on Android. And what's the big thing in social networks now? Mobile. Google+ is mediocre on Android. That's not how to make a product succeed at Google.

Telirati Analysis #10 To Change the Terms of the Privacy Debate Protect All Bits

The trust problem

US technology and Internet services companies have a deep trust problem. They are accused of collaborating with the NSA and, on top of collaboration, being exploited by the NSA. The NSA, in turn, is seen as operating without boundaries, turning America and much of the world into a glass-walled panopticon, devoid of privacy and confidentiality.

This loss of trust has already cost tens of billions, and will cost tens to hundreds of billions more in lost sales outside the US and the "Five Eyes" nations most closely collaborating in NSA surveillance. Any nation that aspires to have practical sovereignty, competitive industry, and independent decision-making finds they cannot trust US technology and services.

Solving the trust problem is one of the most valuable goals in the technology and Internet services industries, and it has proved to be a sticky problem. The key may be to change the terms of the discussion.

Describing the threat

The NSA has taken most of the headlines, but it isn't the only threat to privacy. Without understanding the whole threat, some people may conclude that they trust the US government and/or the NSA, and have "nothing to hide."

This approach ignores the non-US state actor threat and the criminal threat to data and communications security. In corrupt places, the criminal and state actor threat are combined, and there is nobody to trust. Where laws mostly work, they offer only variable protection, and none offer absolute protection against the state, and no laws restrain foreign threats.

The bottom line is that you can't rely on a service to protect you, and you can't rely on laws to protect you. You have to protect yourself.

The role of technology and service providers

The key to regaining trust is to enable individuals to protect themselves. The role of technology and service providers in this is to support individuals' ability to protect themselves. Trust can't be regained directly. It must be earned back by providing tools for privacy.

Tools for individuals

To earn back trust, technology and service providers have to enable "end to end" security that is fully controlled by individuals and enterprises. Some say this would be hard to use, but services like Skype provided a high level of security while growing on the basis of the best ease of use in their product category.

There really are no excuses for not giving individuals simple access to privacy and security, and the means to deliver high security with ease of use have only improved since Skype was introduced. For example, a "web of trust" removes the need to trust an authority to anchor a chain of trust in the identity of the person you are communicating with, and in the validity of their public key.

Social networks provide a means to distribute public keys. Ephemeral keys and "perfect forward secrecy" (PFS) remove the need for individuals to manage keys for real-time communication.
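
A minimal Java sketch of the ephemeral key agreement behind perfect forward secrecy, using only the standard JCA API: both sides derive the same secret from throwaway key pairs, so no long-term key can later decrypt the session.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.util.Arrays;
    import javax.crypto.KeyAgreement;

    public class PfsSketch {
        public static void main(String[] args) throws Exception {
            // Each side generates a key pair per session, then discards it.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(256);
            KeyPair alice = kpg.generateKeyPair();
            KeyPair bob = kpg.generateKeyPair();

            KeyAgreement aliceKa = KeyAgreement.getInstance("ECDH");
            aliceKa.init(alice.getPrivate());
            aliceKa.doPhase(bob.getPublic(), true);

            KeyAgreement bobKa = KeyAgreement.getInstance("ECDH");
            bobKa.init(bob.getPrivate());
            bobKa.doPhase(alice.getPublic(), true);

            // Identical shared secrets; feed them to a KDF for session keys.
            System.out.println(Arrays.equals(
                    aliceKa.generateSecret(), bobKa.generateSecret()));
        }
    }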

A solution for individual privacy and security must include these elements:
  • software the public can be confident contains no back doors
  • simple-to-use technologies wherever possible
  • the more-complex aspects of security made as simple and powerful as possible
These goals are within reach of all major Internet services and technology providers. By reaching these goals, technology and service providers will earn users' trust.

The effect of end-to-end encryption, and of related technologies that remove the need to trust network operators and the equipment makers who built the networks, is to reduce the value of mass surveillance. By encrypting all personal communications and personal data, for everyone, all the time, the more powerful tools needed to extract that information become impractical to apply at a mass scale.

What is a sufficient solution?

Protecting against a sophisticated state actor threat is a daunting task. The NSA actively subverts security technologies. The public can't verify proprietary security technologies. Security agencies worldwide stockpile vulnerabilities and buy them from hackers across the black-to-gray spectrum.

But protecting privacy isn't an impossible task. The ability of state-actor and criminal hackers to take advantage of vulnerabilities is limited by independent discovery of the bugs enabling those exploits. The lifespan of most vulnerabilities is in the range of a few months to two or three years. Many vulnerabilities are only suitable for targeted attacks and cannot be scaled up for mass surveillance without being quickly detected and fixed.

Defense against vulnerabilities must be defense in depth. Vulnerabilities will never all be fixed. Other tools, like intrusion detection and postmortem analysis tools, need to be developed in the open so that they can be trusted to work against all classes of threats. Enterprises that make use of open source software should form cooperative organizations to test and audit that software and fix vulnerabilities.

A sufficient solution consists of:
  • Finding vulnerabilities and reducing the number of vulnerabilities
  • Detecting threats and intrusions
  • End-to-end encryption of all data and communication

The most valuable secret of surveillance is that it mostly depends on data being weakly defended and available in cleartext. If all data is encrypted end-to-end and never available in cleartext except at the intended recipient's system, and all systems are secured to a high standard, we can have privacy, confidentiality, and security in communications and data storage.

America's blind spot

Americans, and even American corporate leaders with plenty of international exposure, now have, and are likely to continue to have, a blind spot regarding the severity of the trust problem they face.

Snowden's files, and subsequent developments such as the allegations that NSA knew of and exploited the Heartbleed bug, have put the US government and the US-based technology industry in disrepute worldwide. 

You might think a problem that large would have set off alarms. But the response of US equipment and services companies has been timid: some have issued indignant press releases; some have participated in proposing reforms that have so far failed to fill even a teaspoon of the credibility hole; some have touted wider use of SSL while retaining access to your data in cleartext. So far, the only major Internet service to have even floated a trial balloon, by means of a trade-press rumor, is Google, which is said to be considering end-to-end encryption for GMail.

Many Americans, even those with exposure to and experience in international markets, live in an "America Bubble." This bubble is made of kind assumptions about the American government: the NSA and FBI protect us. They catch Bad Guys. Some of what they do is a direct service to American businesses: catching credit card fraud, for example.

The fact is that spy agencies and law enforcement have numerous tools other than mass surveillance. Among Snowden's revelations one finds that the US government has extraordinarily subtle listening devices and transmitters available for high-value cases. Ending mass surveillance won't take away from these high-value tools.

The only way to win is to not play the game

It has been a year since the Snowden revelations, and US technology companies have not taken the required steps to regain trust. At both the national and industry level, the only way to regain trust is to not play the conventional game of laws and treaties and weakly protective technologies. By securing users' data against all threats, the terms of the negotiation are changed and the current deadlock can be broken. State security apparatuses will only re-think mass surveillance in an environment where mass surveillance is less valuable.

While many nations have surveillance operations in their state security mechanisms, some apply vastly more resources to these operations than others. The US, for example, spends more on its military than the next several countries combined. Spending on the NSA and other signals intelligence is likely to be proportionate to military spending overall.

If some nations come to realize they can't compete with the NSA, they will then conclude they must change the ground on which the game is played, both to secure their sovereignty, and to secure the competitiveness and trust in their technology industries.

It is an open question whether the US technology industry will take affirmative and effective steps to regain user trust, or whether the US will end up importing that approach from outside after a painful lesson in lost business. The cost to US industry is high and mounting. Likewise, frustration with the US among its allies, not to mention non-aligned nations, is mounting. Some nations' political and business leaders will say "Enough!" and decide that the best way forward is to provide people with the means to have privacy and confidentiality.

Technology alone cannot give us a system of laws, treaties, and security mechanisms that respects privacy, but, by making it harder and less valuable to violate privacy on a mass scale, technology can change the terms of the political debate, and steer it toward a better outcome. Not just for Americans, but for everyone living with a too-intrusive government.

Telirati Analysis #9 The Most Interesting Bug in Android

This might not be the single most interesting bug in all of Android, but out of the ones I have encountered or heard of, it definitely caught my attention.

Combating fragmentation with a single code base

A key feature of Android is good forward and backward compatibility: you can write an app that uses new APIs but still runs on old systems, by testing for the API level and not calling unimplemented features. This enables developers to keep a single code base for many versions of Android.
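
A minimal sketch of what that API-level test looks like in practice; the class, method, and chosen version threshold here are illustrative:

    import android.os.Build;

    class FeatureGate {
        // Guard a call to an API introduced in a newer release so the same
        // code base still runs on older systems.
        void useNewFeatureIfAvailable() {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                // Safe to call APIs that first appeared at this level.
            } else {
                // Fall back to what older systems support.
            }
        }
    }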

However, old Android versions can't see their own future. That means that when a new version introduces new permissions, an app may already have defined its own permission with the same name. Therein lies the basis for a vulnerability identified in this paper by Indiana University and Microsoft Labs researchers.

Bad Android!

The result is an app that never asked for a permission but got one anyway. That's bad! It's not just bad-implementation bad, it's design-flaw bad. On the other hand, there is a fairly narrow set of cases where this vulnerability can be exploited in practice, and there is a workaround: if you uninstall and re-install an app, you will be presented with the permissions it requests.

There is also a telltale that marks apps that are attempting to do this:

For example, the app can define a new system permission such as permission.ADD_VOICEMAIL on Android 2.3.6, which is to be added on 4.0.4. It can also use the shared user ID (UID) [17] (a string specified within an app's manifest file) of a new system app on 4.0.4, its package name and other attributes. Since these privileges and attributes do not exist in the old system (2.3.6 in the example), the malicious app can silently acquire them (self-defined permission, shared UID and package name, etc.). When the system is being updated to the new one, the Pileup flaws within the new Package Manager will be automatically exploited. Consequently, such an app can stealthily obtain related system privileges, resources or capabilities.
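
To make the trick concrete, a hypothetical AndroidManifest.xml fragment along these lines is all the preparation the malicious app needs on the old system. The permission name follows the paper's example; the exact string used by 4.0.4 may differ:

    <!-- On Android 2.3.6 this name is unknown, so the app may freely define it;
         after the upgrade, the app finds itself holding a permission the new
         system treats as its own. -->
    <permission android:name="permission.ADD_VOICEMAIL" />
    <uses-permission android:name="permission.ADD_VOICEMAIL" />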

That means that in almost all cases, apps that escalate permissions this way intended to do so. It also means Google can enhance Bouncer, its system for automatically detecting badware in the app store, to detect apps that use this exploit. But the bug is worse than a silent permission grant:

In the above example, once the phone is upgraded to 4.0.4, the app immediately gets permission.ADD_VOICEMAIL without the user’s consent and even becomes its owner, capable of setting its protection level and description. Also, the preempted shared UID enables the malicious app to substitute for system apps such as Google Calendar, and the package name trick was found to work on the Android browser, allowing the malicious app to contaminate its cookies, cache, security configurations and bookmarks, etc.

Now that's bad! This is one of the most interesting Android bugs I have yet encountered.

Permissions need a bigger fix

In addition to a fix, I think this bug should prompt a change in how app permissions are handled: they should be revocable on an individual basis. That would help thwart "permission creep" as well as reduce the severity of bugs like this one.
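
Individually revocable permissions would also change what apps must do: a grant could no longer be assumed from install time, but would have to be re-checked at the point of use. A hedged sketch, using framework methods that already exist (the class and method names are illustrative):

    import android.Manifest;
    import android.content.Context;
    import android.content.pm.PackageManager;

    class VoicemailHelper {
        // With revocable permissions, check the grant immediately before each
        // privileged operation instead of assuming it from install time.
        boolean canAddVoicemail(Context context) {
            return context.checkCallingOrSelfPermission(Manifest.permission.ADD_VOICEMAIL)
                    == PackageManager.PERMISSION_GRANTED;
        }
    }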

Telirati Analysis #8 Who Makes How Many of the Things We Code For

Every year, Tomi Ahonen publishes an almanac of industry numbers and analysis. For several years now, the way he compiles and compares the numbers has embodied the view that smart mobile devices are computers. Here we take a look at how treating all computing devices as a single market can change your perception of priorities when allocating software development resources.

Tomi's Numbers

We have prettied up the numbers he published on his blog with some graphs. We think the underlying idea is valid: computing devices are not just traditional PCs, and programming, especially interactive programming, applies to all interactive devices, so the industry should be visualized in a way that captures that idea. The graphs in this post should drive home just how thoroughly smart mobile devices have changed what you should pay attention to when choosing what to target when you write a program, or create any product for the modern computing market.

The Big Picture

First, let's see how many of each of the three main types of devices are sold each year:
It's a big, big market. More than 1.5 billion devices that can run sophisticated interactive software are sold each year.

Smartphones dominate, with about a billion devices made per year and a strong growth rate that will drive future dominance. But what's remarkable is how many tablets are sold even before they have begun to displace PCs in workplace computing. Here is how the main device types divvy up the market by percentage:
What begins to emerge is a picture of a world out of balance. People have shifted to mobile devices. These devices are powerful enough to run any interactive software, but important and highly productive categories of software, like line-of-business applications in the enterprise, are stuck on big, heavy PCs.

The Top 10 Makers of Devices

When you treat computers and smart mobile devices as a single market, some interesting things happen to your top 10 list. Yes, Samsung is that big. So is Apple. Lenovo is impressive, and expects to get bigger with Motorola, which, by the way, doesn't make the top 10 on its own. Neither do Toshiba, Acer, Asus, Fujitsu, etc. LG and Sony are both bigger than Dell in unit volume. Sony makes a lot of PCs as well as mobile devices, and will drop in the rankings as they split off their PC business.

It's a huge market, with lots of players. Samsung sits in the place Nokia used to occupy when Nokia was half of the handset business. The top three makers dominate, but that understates Apple's dominance and technology coherence across their product line. Apple created and still dominates revenue for mobile apps and other mobile content.

You might think it would be difficult to enter this market, but it holds a strong allure for capital: 1% of a 1.5-billion-unit computing device market is enough to make a big, profitable, and sustainable enterprise. Blackberry has some distance to fall before lack of volume or share is an existential threat to the company, but they also bear some overheads not shared by overtly me-too participants in this market. It takes money to maintain your own OS and ecosystem infrastructure.

Here is what the current situation looks like in terms of market share percentages:

The New Hegemon

That's all interesting, for those keeping score, but what does this say about the future of the software we make?

That chart is correct: Android is the operating system running 60% of all computing devices. Approximately 850 million of the 1.5 billion new devices made last year that can run interactive applications run Android.

Are you writing Android software? Are your engineers trained to write Android software? The New Hegemony has arrived.

This also highlights the value of technology coherence across a product line and ecosystem. Apple is likely to continue to grow and retain its industry-leading position because it can fight back by offering developers a more effective and more remunerative way to reach customers. Microsoft lacks that coherence and is at greater risk of slipping out of relevance in markets outside its stronghold of enterprise computing.

Both Apple and Microsoft are, nonetheless, fighting a rearguard action in terms of market share. Their share of operating system footprint will shrink over the next 5 years, to the point where Android will occupy a place similar, if not identical, to the one Windows held in the two decades when it had a 90%+ share of PC computing.

For software developers, that means making Android the first target for your software, and making Android the core of your in-house competency, is a priority now. If you are not starting long-lead-time projects, such as LoB enterprise apps, now, you won't be ready when Android reaches 80% of OS share.

The Battle for Wearables, the Enterprise, and the Next Billion

It's possible that Apple might find a new product domain that is out of reach of Android, and that Microsoft could defend its position in the enterprise, but these hopes stand on shaky ground: Apple is unlikely to find that wearables are the answer, and Microsoft's failure to make a desirable Windows tablet robs it of a key weapon for defending enterprise seats. On the other hand, the "next billion" mobile device users, the vast majority of whom are in developing nations, are very likely to find that their first smart mobile device runs Android.

Android is also the most open of the three main platforms. While it isn't developed "in the open," the Android operating system and userland are open source software, licensed under the Apache license. That means Android has found wide applicability in categories that have yet to make an impact on these numbers, but that almost certainly are destined both to add to its numeric dominance and to cement Android's place in the platform world for the long term. Cars will run Android. Cameras already do. There is a label maker that runs Android and a label-making user interface based on Android. Medical devices, beds, appliances... almost any interface that now has buttons and a display will soon be a touchscreen, and behind most of those touchscreens will be Android.

So, while Android might not literally repeat the peak near-total dominance of Windows, which at one point originated 98% of all hits on Web sites, in large part because Apple is not the sitting duck it was back then, it is likely that 80% of interactive computing platforms will run Android, and maybe 90%+ at some point.

It's an Android world.