At Build 2018, Microsoft's vision of AI and the Edge

"The world is a computer." So says Satya Nadella, CEO of Microsoft, as the company opens its Build 2018 developer conference today. It's an important time for the software giant, buoyed by the growth of the cloud but looking to extend that success as needs both niche and widespread grow more demanding. That means more security, more storage, and more devices.

For Microsoft it's an opportunity to discuss one of its new obsessions: edge computing. The goal is more silicon in more places, from the infrastructure making up smart cities, to devices as small as the mouse on your desktop. It's also an opportunity to mesh that with another of the company's fascinations, artificial intelligence.

To say Microsoft is gung-ho about AI is an understatement, but edge computing will help bring it out of the cloud and distribute it more broadly. A key part of edge computing's value will be its efficiency in AI inferencing: that is, taking a trained model and deploying it more widely.

Projects that currently rely on transferring data to the cloud, waiting for it to be processed there, and then having the results returned will, with the growth of edge computing, be able to do more of that work locally. That might be object recognition, or converting speech to text. "In the next 10 years, billions of everyday devices will be connected," points out Frank X. Shaw, Microsoft's corporate vice president of communications, "smart devices that can see, listen, reason, predict and more, without a 24/7 dependence on the cloud."

So, broadening Microsoft's footprint in edge computing is a big part of the push at Build this year. The Azure IoT Edge Runtime is being open-sourced, and joined by the first Azure Cognitive Service for the edge, Custom Vision. That will open the door to AI algorithms that can listen, see, speak, and interpret, all without demanding remote processing.

It's not just theory. Microsoft is working with DJI on commercial drones that can integrate standalone AI functionality. Baked into a DJI Mavic, for example, the drone could fly over seemingly identical pipes but use its onboard artificial intelligence to photograph them, identify damage, and then publish its survey report to the Microsoft 365 platform.

The same technology could be used to help farmers monitor their crops and plan their agriculture, or by public services relying on drones to check on crowd safety and more. Another deal, with chip-maker Qualcomm, will see a new vision AI dev kit running Azure IoT Edge. It'll be used for camera-based IoT solutions, with machine learning and cognitive services downloaded from Azure and run locally on the edge.

Microsoft isn't relying completely on partnerships for its new platforms, mind. Today the company is also announcing Project Kinect for Azure, packaging up the same advanced camera and tracking technology that we've seen used with Xbox and HoloLens, and integrating it with standalone processing power for localized AI.

It's a fairly pivotal week for Microsoft. The company finds itself sharing headlines – and developer head-space – with Google, whose I/O conference begins tomorrow. Both are setting out their stall as the linchpin for artificial intelligence and the cloud; each has an argument for why its implementation of machine learning and more should be the one that coders devote their limited time to. Which argument will hit home remains to be seen.