The future of software engineering: More devices, more data, smarter experiences
December 8, 2023
Earlier in 2023, Apple unveiled the Vision Pro, its latest innovation in the consumer devices space. This continues a trend set in motion over 15 years ago with the iPhone, from which point we all started carrying around a sensor-filled device that enabled rich experiences. Around the same time, Internet of Things (IoT) devices, driven by cloud computing, became part of our everyday lives. At Mirego, we believe that this trend of new sensor-rich devices producing ever-increasing amounts of data, and thus enabling brand new possibilities and experiences, will only accelerate in the next few years. What are the impacts of this trend? How are we preparing to leverage those new opportunities? Let’s explore the topic.
A broader spectrum of devices
As computing and networking capacities have drastically improved over the last few years, it has become possible to build devices that can gather, process, and share larger amounts of data. If this trend continues, we can safely assume that we will be able to create better, more immersive real-time experiences that significantly reduce the gap between the digital and physical worlds.
Today, our smartwatch can provide all kinds of health-related information, such as our heart rate, our sleep patterns, and much more. Athletes now train with a plethora of sensors attached to them, enabling fine-grained optimization of their performance and providing advanced analytics to coaches and fans alike.
Who knows what tomorrow will look like? We expect some form of headset to become ubiquitous, and with it, digital products will be able to collect tons of data about our movements and our surroundings. We would also not be surprised to see open protocols emerge for collecting data from shared sensors in real time. While open data accessible through APIs is commonly used in today’s applications, there are not that many cases where nearby sensors can be seamlessly accessed to enrich applications. Of course, there are numerous security and privacy issues to tackle in order to make such experiences possible, but they should not be insurmountable in the future.
AI will change the way we handle data
It is impossible to write about the future without mentioning the potential impact of artificial intelligence. In this case, the new devices and the sheer quantity of data harvested will be used to feed machine-learning algorithms, which in turn will enable new capabilities. We expect to see more AI-powered, intent-based features in tomorrow’s applications, where users will state what they want to do, without necessarily knowing how to accomplish the task. A good example of this is voice commands, which are already becoming ubiquitous. Most devices can now be controlled by voice, whether it’s your iOS device with Siri, your Android device with the Google Assistant, your car with CarPlay/Android Auto, your TV or even your gaming console. The next step in seamless command inputs appears to be eye-controlled commands, which will be the primary input for the Vision Pro.
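To make the idea of intent-based features concrete, here is a minimal, hypothetical sketch of how an application might route a parsed user intent (coming from voice, chat, or any other input) to a handler, rather than tying each feature to a specific UI control. All names here are illustrative, not taken from any real framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Intent:
    name: str    # e.g. "set_timer"
    slots: dict  # extracted parameters, e.g. {"minutes": 10}

class IntentRouter:
    """Maps intent names to handlers, independent of the input modality."""

    def __init__(self) -> None:
        self._handlers: dict[str, Callable[[Intent], str]] = {}

    def register(self, name: str, handler: Callable[[Intent], str]) -> None:
        self._handlers[name] = handler

    def dispatch(self, intent: Intent) -> str:
        handler = self._handlers.get(intent.name)
        return handler(intent) if handler else "Sorry, I can't do that yet."

router = IntentRouter()
router.register("set_timer", lambda i: f"Timer set for {i.slots['minutes']} minutes")

print(router.dispatch(Intent("set_timer", {"minutes": 10})))
# Timer set for 10 minutes
```

The key design point is that the parsing layer (Siri, a chat assistant, eye-tracking) can change without touching the handlers: the application only ever sees a structured intent.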
With the vast amounts of data being produced and shared, the strain put on computing and networking infrastructures could very well become a real issue. This is why processing at the edge will gain (or regain?) popularity. The approach is already gaining traction, most notably in AI, where a lot of effort is going into running models directly on devices. This is great for privacy, and it also reduces network load. We expect to see more and more optimization of where processing happens, resulting in less data traveling back and forth between clients and servers: most of the processing could be done on the client or even on IoT devices.
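The trade-off between on-device and cloud processing could be sketched as a simple routing decision. The following is a hypothetical illustration with made-up thresholds, not a real production heuristic:

```python
def choose_compute_location(model_size_mb: float,
                            device_free_memory_mb: float,
                            payload_size_kb: float,
                            privacy_sensitive: bool) -> str:
    """Return 'on_device' or 'cloud' for a given inference request."""
    if privacy_sensitive and model_size_mb <= device_free_memory_mb:
        return "on_device"  # keep sensitive data local whenever possible
    if model_size_mb > device_free_memory_mb:
        return "cloud"      # the model simply doesn't fit on the device
    if payload_size_kb > 10_000:
        return "on_device"  # avoid shipping huge payloads over the network
    return "on_device"      # default: save bandwidth, reduce latency

# A small privacy-sensitive model fits locally; a large one must go to the cloud.
print(choose_compute_location(200, 1024, 50, privacy_sensitive=True))    # on_device
print(choose_compute_location(4000, 1024, 50, privacy_sensitive=False))  # cloud
```

In practice such a decision would also weigh battery, latency budgets, and connectivity, but the point stands: where processing happens becomes a first-class design parameter.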
Working on what’s next
As developers at Mirego, our goal is to prepare for the future without risking it all on early assumptions. One of the things we have started thinking about is designing software with user intent in mind rather than simply thinking about a list of features. How can we enable the user to accomplish a task, without being too specific about how that task will be performed? Will it come from a voice command? Will it come from a chat with an AI assistant? Will it be very specific (“I want to watch John Wick 3”) or vague (“I want to watch a recent action movie with great reviews”)? While it is clearly not just an engineering challenge, this is something that developers should keep in mind when architecting their apps. It also has the added benefit of making our apps more accessible, which has become an important concern over the last few years.