Google’s machine learning coup on UI design

It’s been a couple of weeks now since Google I/O 2016 came to a close, and with very few tangible products released there has been a lot of time to come up with some crazy theories on what Google is up to. So here’s one of my left-field theories on where Google is headed: I believe that Google intends to use its machine learning to generate user interfaces for most standard apps.

This year a lot of focus was placed on Google’s application of machine learning, as well as on the new ‘conversational UI’. In previous years there was a lot of “voice”: voice actions, voice typing, voice wake-up and so on. This also wasn’t the first year we’ve heard Google talk up its “machine learning” and how it has been applied to all sorts of products like Photos, Inbox, the Play Store and something they call Search. Applying machine learning to all of the audio and voice data collected over the years provides the foundation for an AI agent to offer a conversational interface.

So what happens if this same technique is applied to user interface types other than voice? I expect the next move for Google will be to take over the user interface experience using machine learning.

Apps have always had visual interfaces. We open an app and look at the interface to determine what interactions to perform. More importantly, people have been doing this with a LOT of apps for a LONG time. Undoubtedly Google has immense amounts of data on how common apps work and how people interact with them. Take social media apps, for example. These are almost always list-detail structured apps: Facebook’s timeline is a list of posts, and any given post can be opened to reveal more details like metadata and comments. Instagram photo streams, Twitter feeds and many more operate (more or less) in this way. This isn’t unique to the social media category either. The same pattern shows up in many other app categories; the nature of the data an app presents usually dictates its presentation to a certain extent.

It stands to reason that apps presenting similar types of data would present them in similar ways. It’s entirely possible that Google will use its AI, armed with machine learning about how we use apps, to create app interfaces automatically based on the data provided to the AI for presentation. Providing the AI data structured as lists within lists would give a list-detail UI, whereas a large flat array of singleton objects would yield a grid-based UI.
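To make that idea concrete, here is a minimal sketch of what inferring a UI pattern from the shape of the data might look like. Every name in it (AppData, UiPattern, inferPattern) is made up for illustration; this is my guess at the concept, not any real Google system.

```kotlin
// Hypothetical sketch only: none of these types or names come from a real Google API.
// The idea: infer a UI pattern from the shape of the structured data an app provides.

sealed class AppData {
    data class Item(val fields: Map<String, String>) : AppData()
    data class Collection(val children: List<AppData>) : AppData()
}

enum class UiPattern { LIST_DETAIL, GRID, SINGLE_CARD }

fun inferPattern(data: AppData): UiPattern = when (data) {
    // A lone object has nothing to drill into: show it as a single card.
    is AppData.Item -> UiPattern.SINGLE_CARD
    is AppData.Collection ->
        // Lists whose entries are themselves collections suggest list-detail;
        // a flat array of singleton objects suggests a grid.
        if (data.children.any { it is AppData.Collection }) UiPattern.LIST_DETAIL
        else UiPattern.GRID
}

fun main() {
    // A feed where each post carries nested data (comments, metadata) -> LIST_DETAIL
    val feed = AppData.Collection(listOf(
        AppData.Collection(listOf(AppData.Item(mapOf("text" to "First post")))),
        AppData.Collection(listOf(AppData.Item(mapOf("text" to "Second post"))))
    ))
    println(inferPattern(feed))   // LIST_DETAIL

    // A flat array of standalone photos -> GRID
    val photos = AppData.Collection(listOf(
        AppData.Item(mapOf("url" to "a.jpg")),
        AppData.Item(mapOf("url" to "b.jpg"))
    ))
    println(inferPattern(photos)) // GRID
}
```

A learned model would obviously go far beyond a two-branch heuristic, but the contract would be the same: structured data in, a presentation pattern out.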

From a longer-term perspective, what about alternative interface possibilities? I am fortunate enough to still have my eyesight intact and would not know the first thing about designing an app interface that would be “native” to someone who is sight-impaired. It’s quite common for autistic people to respond and interact through non-verbal communication like music; again, I wouldn’t know a thing about creating an interface for a device capable of acting as a musical interface. The app data being presented wouldn’t change; only the method of “display” would. Machine learning could be applied to teach the AI how to build interfaces for these alternative output devices, and once it has learned to do so it should be able to construct a native interface from the same structured app data used by the visual or conversational UI methods.
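As a rough illustration of that separation, the same structured data could be handed to different renderers, one per output device. Everything below (Post, InterfaceRenderer, the renderer classes) is invented purely to show the decoupling; in the scenario I’m describing, a learned system would presumably produce these renderers rather than have them hand-written.

```kotlin
// Hypothetical sketch: the data stays the same; only the renderer changes.

data class Post(val author: String, val text: String)

// One invented abstraction per output device a learned system might target.
interface InterfaceRenderer {
    fun render(posts: List<Post>)
}

class VisualRenderer : InterfaceRenderer {
    override fun render(posts: List<Post>) =
        posts.forEach { println("[card] ${it.author}: ${it.text}") }
}

class SpokenRenderer : InterfaceRenderer {
    override fun render(posts: List<Post>) =
        posts.forEach { println("(speak) New post from ${it.author}. ${it.text}") }
}

class MusicalRenderer : InterfaceRenderer {
    // Stand-in for a non-verbal, audio-cue based presentation.
    override fun render(posts: List<Post>) =
        posts.forEach { println("(tone) ${it.text.length} notes for ${it.author}") }
}

fun main() {
    val feed = listOf(Post("Alice", "Hello"), Post("Bob", "Machine-made UIs, eh?"))
    listOf(VisualRenderer(), SpokenRenderer(), MusicalRenderer())
        .forEach { it.render(feed) }
}
```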

The much-talked-about Android Instant Apps will likely facilitate this. Android Instant Apps are designed to allow external sources like the Google Assistant to access an app and circumvent its visual UI. Once apps have been configured with hooks to bypass the visual UI, any alternative interface device could use those hooks to sidestep the standard visual interface when “opening” apps.
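I have no idea what those hooks will actually look like, but conceptually they might resemble a registry that answers a deep-link-style request with structured data instead of a screen. The sketch below is pure speculation on my part; none of these names come from the Instant Apps or Assistant APIs.

```kotlin
// Hypothetical sketch: a "hook" that answers a deep-link-style request with
// structured data instead of drawing a screen. Not based on any real
// Instant Apps or Assistant API; all names are invented.

data class AppResponse(val title: String, val fields: Map<String, String>)

// Each hook handles one URI prefix and returns data, never a screen.
class HookRegistry {
    private val hooks = mutableMapOf<String, (String) -> AppResponse>()

    fun register(prefix: String, handler: (String) -> AppResponse) {
        hooks[prefix] = handler
    }

    fun open(uri: String): AppResponse? =
        hooks.entries.firstOrNull { uri.startsWith(it.key) }?.value?.invoke(uri)
}

fun main() {
    val registry = HookRegistry()
    registry.register("myapp://post/") { uri ->
        val id = uri.removePrefix("myapp://post/")
        AppResponse("Post $id", mapOf("author" to "Alice", "text" to "Hello"))
    }

    // A voice assistant, a braille device or a musical interface could all
    // "open" the app this way and present the response however suits them.
    println(registry.open("myapp://post/42"))
}
```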
