Native Apps Are Killing the Hardware Revolution

The mobile computing revolution is in full swing: smartphone sales surpassed desktop sales in Q4 2010, and in the developing world, your computer is more likely to be something you hold in your hand than something on your lap.

The increase in mobile platforms isn’t limited to phones, however: the Pebble smartwatch looks like a trend-setter in a category Apple may soon enter, and Google’s Glass, whether or not it succeeds in its current form, is a sign of a future of wearable computing. “Smart” devices are popping up everywhere, from the refrigerator and the thermostat to the TV and even the deadbolt.

It doesn’t seem a stretch to say that the future of the computer lies all around us, not just in a few specific devices.

But, as has always been the case with computing platforms, the success of a platform relies largely on the applications developed for it, and in many cases depends on a single “killer app.” A platform enables new functionality; it rarely provides that functionality itself. Apple’s advertising recognizes the importance of having “an app for that,” and BlackBerry’s developer bounty puts a price on how much applications matter to an ecosystem.

The speed and cost at which new applications can be deployed on these new computing devices dictate the rate of innovation for the hardware revolution, and the return to native apps is slowing that rate considerably.

With performant web apps, we finally had a way to develop and deploy applications that were truly cross-platform [1]. In addition to being cross-platform, web apps are managed almost entirely by the vendor - shortening development cycle times and lowering costs for the customer. The SaaS revolution speaks to the strength of the web app model for both software vendors and consumers.

But.

The gold rush that came with the exploding popularity of smartphones led to a dramatic shift in developer attention away from web apps and toward native mobile apps. Native apps are now top of mind for anyone thinking of starting a tech startup, and with every new platform that is released, a new set of developers is champing at the bit for the chance to develop for it (see: Glassware).

As a result of this return to native applications, if you have a great idea for an application, you once again have to develop it for every platform individually. This kind of platform fragmentation is an absolute nightmare for an undercapitalized technology company. Developing for the web is cheaper than ever, but the proliferation of new devices with new platforms threatens to erase the gains the technology community has made in how accessible starting a company is - and, with that, the number of successful companies that get started.

The device category where this is most striking is the “Smart TV.” In the past three weeks I’ve hooked up two different TVs to our local network to take advantage of streaming Netflix, Hulu, and the like. Instead of just shipping a stripped-down Linux distro or Android fork with a web browser inside, each TV came with a different set of applications to hook up with popular streaming services. Each had its own Netflix app [2], Hulu app, Amazon Instant Video app, YouTube app, etc. Regardless of who developed these apps, be it YouTube or Sony, this sort of platform-specific application development absolutely kills innovation. How can we expect new and innovative ways to use our TVs if every new entrant has to develop for so many different platforms - if they’re given access at all?

The web browser is magical - it gives every computing device access to every web app in the world. By not including a web browser, the simplest of features for an internet-connected device, these Smart TVs are closing us off from the innovation we’ve become accustomed to.

As the hardware revolution brings computing to an increasing number of devices, we need to fight for an open web on those devices to keep software innovation moving forward.

[1] - Yes, there are differences between web browsers that have to be accounted for and that take valuable development time, but those are getting smaller with stronger standards, and they are nothing compared to developing for, say, Windows and OS X and Linux.

[2] - For what it’s worth, both TV-specific apps were worse than Netflix’s web app.

Authoritative PGP Key Servers

With the PRISM scandal in the news, my attention was brought back to an issue that was nagging me a few months ago: even with fast computers in the palms of everyone’s hands and near-ubiquitous networking, we don’t use strong encryption in our everyday lives.

For me personally, and I suspect for many other Americans, the things I care most about live in my Gmail and Dropbox accounts. For the most part, we trust Google and Dropbox (and others) to keep that data safe and private. PRISM invalidates that trust. The extent to which these organizations participated in PRISM is unclear - as is what “participation” even means - but it is clear that under FISA, even if they were handing over your data, they wouldn’t be able to tell you about it.

Even without Google’s compliance, if the NSA (or any other party) controls one of the email relays between the sender and the recipient, it can read the contents of an email message, as SMTP lacks end-to-end encryption.

That revelation turned my beef with the lack of everyday encryption from a minor annoyance into a serious pain point.

We have an open-source, near-military-grade encryption standard available in OpenPGP, but it seems that only the most paranoid and tech-savvy take advantage of it. Why isn’t PGP more widespread?

The Web of Trust can’t grow effectively. For public key cryptography to work, you have to trust that a user’s public key truly belongs to that user, and not to someone impersonating them (or, less maliciously, that it isn’t an old, superseded key). PGP solves this problem in a decentralized manner by having users sign each other’s identity certificates to certify them as trusted. If several people you trust have signed another user’s certificate, you can be pretty confident that the identity is accurate. This works great if everyone uses PGP, but at anything short of near-complete adoption it runs into issues: new users (with no one to sign their certificates) can’t be trusted at all, and two users without a common connection have no way to secure their communications.

Too much human intervention. There are some good plugins that make email encryption fairly painless, but they still require the end user to find and maintain a set of public keys for their contacts. Public key servers help with the distribution of keys, but they are built for a human to look up the proper key and encrypt a message with it. That is far too much manual work, especially for a cryptography solution, to ever become a mainstream activity.

With these issues in mind, I came up with the concept of Authoritative Key Servers: PGP Key Servers designated as the holder of up-to-date and accurate public keys for the users of a domain.

I describe the HTTP Authoritative Keyserver Protocol in more detail here, but it essentially allows you to designate a keyserver as authoritative for a particular domain in that domain’s DNS records. The designated server then responds to requests for a particular user’s public key block, and the response can be trusted as up-to-date and accurate.

As a result, if a sender and a recipient have no prior communication and no common links, the sender can trust the DNS records of the recipient’s domain in order to send secure communications, and with a simple plugin, can do so automatically, without user intervention.
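To make the flow concrete, here is a minimal sketch of that lookup in Node.js. The TXT record format below ("pgpkey-server=<host>") is my own shorthand for illustration - the actual designation mechanism is defined in the protocol description linked above - but the key URL follows the /users/<domain>/<user> pattern shown in footnote [1]:

```javascript
var dns = require('dns');
var http = require('http');

function fetchPublicKey(email, callback) {
  var user = email.split('@')[0];
  var domain = email.split('@')[1];

  // Step 1: find the keyserver designated as authoritative for the domain
  dns.resolveTxt(domain, function (err, records) {
    if (err) return callback(err);

    var keyserver = null;
    records.forEach(function (record) {
      // older Node versions return strings, newer ones arrays of chunks
      var txt = Array.isArray(record) ? record.join('') : record;
      var m = txt.match(/^pgpkey-server=(.+)$/); // placeholder record format
      if (m) keyserver = m[1];
    });
    if (!keyserver) return callback(new Error('no authoritative keyserver designated'));

    // Step 2: fetch the user's public key block, e.g.
    // http://keys.tgriff3.com/users/tgriff3.com/trey
    http.get('http://' + keyserver + '/users/' + domain + '/' + user, function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () { callback(null, body); });
    }).on('error', callback);
  });
}

// With the key block in hand, a mail plugin can encrypt and send - no
// human intervention required.
fetchPublicKey('trey@tgriff3.com', function (err, block) {
  if (err) throw err;
  console.log(block);
});
```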

This allows adoption of a strong encryption standard to happen at a natural pace - I can use this scheme today, and anyone can send me secure communications [1], but I can still communicate with those who have yet to adopt it.

I created an open source implementation of my Authoritative Keyserver, and anyone can set up their own key server. Alternatively, if you don’t want to host your own keyserver, you can designate my key server (keys.tgriff3.com) as authoritative and send me your public key block.
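For a sense of scale, the server’s job is tiny: answer a GET for /users/<domain>/<user> with that user’s ASCII-armored public key block. The sketch below is not the actual implementation (see the repo linked above) - a hard-coded map stands in for a real key store - but it shows the shape of the protocol’s server side:

```javascript
var http = require('http');

// In a real deployment this would be a datastore of submitted,
// verified key blocks; a literal map keeps the sketch short.
var keys = {
  'example.com/alice': '-----BEGIN PGP PUBLIC KEY BLOCK-----\n...\n-----END PGP PUBLIC KEY BLOCK-----\n'
};

http.createServer(function (req, res) {
  var m = req.url.match(/^\/users\/([^\/]+)\/([^\/]+)$/);
  var block = m && keys[m[1] + '/' + m[2]];
  if (block) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(block);
  } else {
    res.writeHead(404);
    res.end('no key for that user\n');
  }
}).listen(80); // serve wherever the domain's DNS designation points
```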

For email encryption, the next step is a Gmail plugin that automatically looks for an Authoritative Keyserver and encrypts the message contents.

I’m by no means a crypto expert, so this protocol might be vulnerable to serious security issues that I’m not aware of - please let me know if that’s the case. I know that DNS isn’t the most trustworthy system, and that’s the Achilles’ heel of this protocol. But it has to be better than the status quo: communication with no encryption.

[1] - Try it: my email is trey @ this domain, and the authoritative keyserver for this domain is keys.tgriff3.com, meaning you can access my public key block at http://keys.tgriff3.com/users/tgriff3.com/trey. But don’t take my word for it: check my DNS records!

Joules: Node.js Module loading in the browser

I recently set up a dedicated domain for Joules, the library I created for Node.js-style module loading in the browser. The idea behind the library was to create something as flexible as RequireJS - which allows for very rapid development by skipping a build step, while still letting you compile scripts into a compressed version for production - but that supports Node.js modules, as Browserify does.

Joules accomplishes both of these, resulting in a Node.js-like environment for the front-end that still lets you just refresh the page to see your changes, making development faster.
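As an illustration, modules under this model are plain Node-style files. The file names here are invented for the example, and the exact way Joules is bootstrapped onto the page is covered in its docs rather than shown here:

```javascript
// capitalize.js - an ordinary Node-style module: no AMD wrapper, no build step
module.exports = function (s) {
  return s.charAt(0).toUpperCase() + s.slice(1);
};

// greet.js - relative requires resolve the same way they do in Node
var capitalize = require('./capitalize');

module.exports = function (name) {
  return 'Hello, ' + capitalize(name) + '!';
};

// main.js - the entry point: edit a file, refresh the page, see the change
var greet = require('./greet');
document.body.textContent = greet('world');
```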

Take a look at the demo to see what I mean.

Making front-end JavaScript more modular is a real problem. There have been some good attempts to solve it; notably, Component has taken a holistic approach to modularizing the front-end, including other assets in its “components”.

Joules doesn’t attempt anything that ambitious - it’s a module loader, and just a module loader. It is intended to behave exactly like Node’s module loader, even where Node diverges from the CommonJS spec. It’s unopinionated about how you build your project or what libraries you use.
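One concrete divergence worth spelling out (a known Node behavior, not something specific to Joules): the CommonJS spec only gives a module an exports object to augment, while Node also lets a module replace module.exports wholesale - and a huge amount of npm code relies on that. A loader aiming for Node compatibility has to honor both forms:

```javascript
// math.js - spec-style: augment the exports object you're handed
exports.add = function (a, b) { return a + b; };

// add.js - Node-style: replace module.exports entirely to export one function
module.exports = function add(a, b) { return a + b; };

// Node resolves require() to whatever module.exports ends up being, so if a
// module assigns to both, the module.exports assignment wins and the original
// exports object is discarded. A faithful loader must reproduce this.
```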

My hope is that with a dependable front-end module loader that takes advantage of the existing ecosystem for JavaScript modules (npm, and the many thousands of packages on it), developers will be able to create clean structures in whatever way is best for their project.

It’s still a very young project: I haven’t written test scripts to verify its adherence to Node’s module loading behavior, so bugs will almost certainly continue to come up. That’s the biggest hurdle to making it a stable, usable library. It also lacks the core modules that appear in almost every Node project. While some of these are unique to a server environment, Browserify has gone a long way toward making many of them available in the browser, and I’d like to take advantage of that work to bring core modules to the browser using Joules.

I’ll continue to develop Joules into a more mature, stable project that can be used in production. I hope it helps some developers facing the same issues I did when looking at the options available for writing modular, reusable front-end JavaScript.