Native vs Hybrid Applications: The Final Argument
April 21, 2017
I get asked this question again and again. Although it’s a seemingly simple question, the answer is far from straightforward. Deciding whether to go native or hybrid requires delving into more complex questions and analyzing each factor in depth. Let’s start at the very beginning and go through this critical process together.
What’s a native application?
Before I can even get into why native can be beneficial in certain circumstances, it’s important to understand the underlying framework that defines a native application. Here’s my definition of it:
A native application is written in a way that makes the most of the device’s software development kit (SDK). It takes full advantage of the features offered by the device’s core functionalities and uses the operating system efficiently to provide the best possible user experience.
It’s not so much about programming languages as it is about optimizing for the platform itself so that users can enjoy its integrated features at their best. In contrast, hybrid applications use HTML user interfaces and access the native SDK via JavaScript. These apps don’t qualify as native because they rely on HTML to bridge the gap between the user interface and the core device functionalities. Consequently, they aren’t maximizing the device’s capabilities, which can result in inefficiencies and a shakier user experience.
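To make that bridge concrete, here’s a minimal sketch of a Cordova-style native call. cordova.exec() is Apache Cordova’s actual bridge function, but the “CameraService” plugin and its “takePicture” action are made-up names for illustration, not a real plugin’s API:

```javascript
// Sketch of the hybrid "bridge": the HTML user interface runs inside a
// web view, and JavaScript crosses into native code through a plugin layer.
// cordova.exec() is Apache Cordova's bridge call; "CameraService" and
// "takePicture" are hypothetical names for a native-side plugin.
function takeNativePicture() {
  cordova.exec(
    function (imagePath) {
      // Success callback, invoked from native code once the SDK call returns.
      document.getElementById("preview").src = imagePath;
    },
    function (error) {
      // Error callback: the native side rejected or failed the call.
      console.error("Native call failed:", error);
    },
    "CameraService", // hypothetical native plugin
    "takePicture",   // hypothetical native action
    []               // arguments marshalled across the bridge
  );
}
```

Every such call is serialized across the web-to-native bridge, which is part of why a hybrid UI can feel less immediate than one built directly against the SDK.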
Why should I care about native?
Okay, so let’s say that I’m a foodie. (Trust me, it’ll all make sense in a minute.) Like any respectable foodie out there, I love to cook. However, one ingredient I don’t like to use when I cook is tofu. Don’t get me wrong; it’s not that I can’t whip up a deliciously savory tofu dish, it’s just that I feel it’s not the best option for me. You see, tofu requires a significant amount of time and effort to reach its optimal condition. On the other hand, with next to no effort, I can take a decent cut of beef and enjoy its flavorful components in their natural state. The same analogy applies to hybrid applications (the tofu) and native applications (the beef). Investing little effort into a hybrid application results in an “okay” product, whereas pouring a lot of time and effort into it can still yield a great one. All things considered, native applications provide optimal conditions and streamlined efficiency with far less effort.
Deciding whether you want to go native doesn’t rest solely on capabilities. There’s plenty you can do with hybrid applications, too. At the end of the day, it all depends on the amount of effort and time you’re willing to put into optimizing the user experience. A good example of this dates back to 2012, when Facebook’s mobile application went from being a hybrid application to a mostly native one, a notable move applauded by the tech industry and users alike. A year later, LinkedIn chose native over HTML5 to improve its user experience as well.
Does native application development cost more?
Generally speaking, when comparing similarly finished products, native and hybrid applications fall into the same price range. Hybrid application proponents assure you that you’ll write the application once and that it will run the same in any given circumstance. In the late 1990s and early 2000s, Java made the same promise. What they should have said was “write once, debug everywhere.” The reason this promise couldn’t be kept is simple: every operating system version had (and still has) its own particularities. While Java attempted to smooth over the differences between operating systems, it ultimately failed.
When writing hybrid applications, you can expect to encounter the same issues Java developers experienced. HTML, CSS, and JavaScript also promise that you’ll arrive at the same result, everywhere. (Haha! I wish!) If that were true, we wouldn’t have browser wars, and we wouldn’t have to detect features. For instance, you wouldn’t have to check whether a browser supports specific CSS animations. Different browsers on different operating systems simply don’t behave the same way. Web and hybrid applications have to make allowances for these differences, and that’s where hybrid gets expensive.
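As a concrete example of that feature detection, here’s a small sketch using the standard CSS.supports() API, with a manual probe as a fallback for browsers that lack even the detection API itself. The specific property and class names below are just examples:

```javascript
// Detect whether the browser supports a CSS feature before relying on it.
// CSS.supports() is a standard API, but older browsers lack even that,
// so we guard the guard itself with a manual probe.
function supportsCss(property, value) {
  if (typeof CSS !== "undefined" && typeof CSS.supports === "function") {
    return CSS.supports(property, value);
  }
  // Fallback: set the property on a detached element and see if it sticks.
  var probe = document.createElement("div");
  probe.style.setProperty(property, value);
  return probe.style.getPropertyValue(property) !== "";
}

// Example: only enable an animated transition where it's actually supported.
if (supportsCss("animation-name", "slide-in")) {
  document.body.classList.add("css-animations");
} else {
  document.body.classList.add("no-css-animations"); // style a static fallback
}
```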
Truth be told, this is what really happens:
- The developer begins implementing the first few features on one device and in one operating system version. Everything is great. (So far!)
- Next, the developer tests the features on another OS. They don’t render the same way. (Ughhh!) Some of the HTML, JavaScript, and CSS features aren’t available on the new OS, so the code has to be adapted to support it.
- The developer then has to go back to the first platform to figure out which features the adaptations broke or slowed down. More adjustments are needed to restore the app’s responsiveness and performance.
- This series of events repeats itself as each new feature is developed. It’s a never-ending story.
So if feature development cost is measured by the time required to write a feature once, for a single OS, the answer is yes: less effort is required up front. However, based on my experience and in-depth data analysis, the adaptation and debugging cycle across all platforms costs just as much as writing everything twice. It’s definitely something to consider.
Does native require developers to learn each platform?
Regardless of whether an application is native or hybrid, your development team should always learn each platform and understand its subtleties. Set aside SDK implications, browser specifics, and technical limitations for a moment: it’s about knowing how the platform behaves and how the users of that platform expect their applications to behave in context. User experience should always be at the heart of your decision making. Would you ever ship an Android app with a faulty “back” button? Or remove the swipe gesture from an iOS user’s app? No. That’s why it’s vital for you and your team to understand each platform and how its applications are expected to respond in real-life situations. The better you understand the core foundation, the better you can build user-friendly applications that are intuitive and well integrated.
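For instance, in a Cordova-based hybrid app, honoring Android’s hardware back button takes explicit work. Here’s a hedged sketch: deviceready and backbutton are standard Cordova events and navigator.app.exitApp() is Cordova’s Android API, while isOnRootScreen() and goToPreviousScreen() are hypothetical app-level helpers:

```javascript
// Sketch: honoring Android's hardware "back" button in a Cordova app.
// "deviceready" and "backbutton" are standard Cordova events;
// isOnRootScreen() and goToPreviousScreen() are hypothetical app helpers.
document.addEventListener("deviceready", function () {
  document.addEventListener("backbutton", function () {
    if (isOnRootScreen()) {
      // On the root screen, "back" should leave the app, as Android
      // users expect. navigator.app.exitApp() is Cordova's Android API.
      navigator.app.exitApp();
    } else {
      // Anywhere else, "back" should pop one level of in-app navigation.
      goToPreviousScreen();
    }
  }, false);
}, false);
```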
Are hybrid applications a bad thing?
No. As I mentioned above, it’s more about the effort you’re willing to invest and the user experience expectations you’ve set. In fact, hybrid applications can be a great way to reuse existing web functionality in an app. If an existing web application already offers a great mobile experience, you can quickly integrate it into your app so that your users can enjoy it right from their mobile phone screens.
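As a sketch of that kind of reuse, here’s how a Cordova app might surface an existing mobile web flow with the cordova-plugin-inappbrowser plugin; the URL below is a placeholder standing in for a web application you already ship:

```javascript
// Sketch: surfacing an existing mobile web flow inside a hybrid app,
// using the cordova-plugin-inappbrowser plugin. The URL is a placeholder.
document.addEventListener("deviceready", function () {
  var browser = cordova.InAppBrowser.open(
    "https://example.com/mobile-flow", // placeholder URL
    "_blank",                          // open in the in-app browser view
    "location=no,toolbar=yes"
  );
  // React when the user closes the embedded web flow.
  browser.addEventListener("exit", function () {
    console.log("User left the embedded web flow");
  });
}, false);
```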
Parting words: focus on your users, and the answer will come to you
Trust me, I know native application development can seem daunting at first, and the mere thought of endless hybrid app debugging cycles is enough to make your head spin. Nobody ever said mobile development was easy. But we can all agree that mobile is at the crux of our socio-cultural reality and environment. It’s where our users interact and share on a daily basis. It’s where they experience and relive memorable moments. And it’s where we need to shift our focus if we’re looking to create positive change that leaves a lasting impact on the world we live in.
Next time you find yourself pondering the “hybrid vs native” question, keep these questions top of mind to guide your reflection:
- What kind of user experience am I going for?
- What kind of functionalities am I trying to build and how are they best delivered to the user?
- What expectations have I set for myself and how will I measure my application’s success?
Remember, good UX alone can only get you so far; it’s about bringing value to the user in a way that’s intuitive and simple. The Mirego team can help you with this, so don’t hesitate to reach out!