After a slow and rough start with Apple’s Siri several years back, voice-controlled devices are beginning to take hold, thanks in large part to the success of Amazon’s Echo and its Alexa personal assistant. The proliferation of Alexa-enabled devices at the recent Consumer Electronics Show put an exclamation point on that development.

But as cool and interesting as accurate voice recognition and simple voice control may be, the real impact of voice-driven computing is significantly more profound. The success of Alexa and Echo is like an iceberg: the visible tip appears to threaten the big ships at the surface of the tech industry, but the far larger mass below the waterline will hit the hidden parts of the business, including operating systems, chips, software, business models and much more.

First, as several people have pointed out, Alexa functions very much like an operating system or platform. But it’s one that’s independent of other popular platforms, and it creates a whole new battle for dominance. It’s actually one of the best examples of what I call a “meta-OS”: a layer of software that sits a level of abstraction above traditional operating systems like Android, iOS or Windows, but still lets developers create applications and services that work with it.

The beauty of a meta-OS is that it’s independent of the underlying OS, meaning apps written for a meta-OS like Alexa don’t have to be tailored to whatever OS sits beneath it. That’s why, for example, you can see Alexa running not only on traditional computing devices, but also on cars, connected lamps and more. For those who remember Java virtual machines (JVMs), a meta-OS takes that concept to the next level.

The software being created for Alexa, however, is very different from traditional mobile apps (and from the Java applications that ran on those JVMs), and that’s yet another of the profound changes that voice-controlled computing is starting to bring about. In the case of Alexa, these new software add-ons are called “Skills”: simple, tightly focused extensions of Alexa’s core capabilities.
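To make that contrast concrete, here is a minimal sketch, in Python, of what handling a Skill request can look like. The request and response shapes follow the general pattern of Amazon’s published custom-skill JSON interface, but the intent name and the handler logic here are hypothetical illustrations, not production code.

```python
# Minimal sketch of a voice-skill handler (illustrative only).
# Alexa sends the skill a JSON request describing what the user said;
# the skill replies with JSON telling Alexa what to speak back.
# Note what's absent: no screens, no buttons, no layout code.

def handle_alexa_request(event: dict) -> dict:
    """Map an incoming Alexa request to a spoken response."""
    request = event.get("request", {})

    if request.get("type") == "LaunchRequest":
        # User opened the skill by name (invocation name is hypothetical).
        speech = "Welcome. Ask me how much coffee is left."
    elif request.get("type") == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "GetCoffeeStatusIntent":  # hypothetical intent
            speech = "The pot is about half full."
        else:
            speech = "Sorry, I didn't catch that."
    else:
        speech = "Goodbye."

    # The response is plain data: what to say, and whether to keep listening.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The entire “app” is a short function that turns one spoken request into one spoken answer, which is exactly why Skills tend to be simple and narrowly focused.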

More importantly, Alexa is built on advanced technologies, including artificial intelligence and natural language processing. And because there’s no screen to rely on, the user interface and interaction models for these “invisible” apps are wholly different from those of screen-based devices. Though I’m not a programmer, my understanding is that creating applications for this new software model is very different from coding common mobile apps.
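One way to see the difference: instead of laying out screens, a voice developer declares an “interaction model,” essentially a list of intents and sample phrases that should trigger them, while the platform handles the speech recognition and language understanding. The sketch below mirrors the general shape of such a model; the invocation name, intent and phrases are made up for illustration.

```python
# Sketch of a voice interaction model: the developer declares WHAT users
# might say, not HOW anything looks. All names/phrases are hypothetical.
INTERACTION_MODEL = {
    "invocationName": "coffee tracker",
    "intents": [
        {
            "name": "GetCoffeeStatusIntent",
            "samples": [
                "how much coffee is left",
                "is there any coffee",
                "check the coffee pot",
            ],
        },
    ],
}
```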

The model for creating and distributing these Skills is also very different from what we’ve seen with the big app stores. That, in turn, portends significant changes in the software and business models surrounding applications.

On top of all that, the type of processing needed to run Alexa and these Skills is very different from what traditional computing devices require, and that has profound implications for semiconductor makers. Though much of Alexa’s work is currently done in the cloud, future iterations will likely see more of the computing and intelligence happening on the endpoint device.

The “inferencing” work (that is, running a trained algorithm to recognize patterns in new input) necessary to do AI-driven voice computing on a connected device has been found to run better on different types of chips than the typical CPUs we see in smartphones, PCs and tablets. That’s why, for example, at this year’s CES keynote, Nvidia touted its work with Google on getting GPUs (graphics processing units, which are proving to be powerful tools for natural language processing and other AI-based applications) into future iterations of Google Home, Google’s competitor to Amazon’s Echo.
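The reason GPUs fit this workload is that neural-network inferencing is, at its core, large batches of identical multiply-and-add operations, exactly the kind of work a GPU’s thousands of cores can run in parallel. Here is a toy sketch in plain Python with NumPy, with made-up sizes and random numbers standing in for a trained model, just to show the shape of the computation:

```python
import numpy as np

# Toy sketch of "inferencing": a trained model is just fixed weight
# matrices, and running it is matrix math. Each layer is a matrix-vector
# multiply plus a simple nonlinearity; GPUs accelerate this by computing
# the many independent multiply-adds at the same time.

rng = np.random.default_rng(0)
W1 = rng.standard_normal((128, 40))  # layer-1 weights (stand-ins for trained values)
W2 = rng.standard_normal((10, 128))  # layer-2 weights

def infer(features: np.ndarray) -> int:
    """Run a tiny two-layer network on one audio feature vector."""
    hidden = np.maximum(0, W1 @ features)  # ReLU activation
    scores = W2 @ hidden
    return int(np.argmax(scores))          # index of the most likely word/command

# 40 acoustic features for one slice of audio (made-up input)
print(infer(rng.standard_normal(40)))
```

A CPU works through those multiply-adds a handful at a time; a GPU does thousands at once, which is why it suits always-listening, AI-driven devices.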

Finally, speaking of competition, while much of the discussion on voice-based computing has focused on Amazon and Alexa, the truth is, many companies are working on their own voice-driven meta-OS offerings.

Amazon has a great head start, but in addition to Google’s voice-driven computing efforts on Google Home and its Google Assistant, Microsoft is working to expand Cortana, Apple is continually improving Siri, Samsung plans to enter the fray with its new Bixby assistant, and many more offerings are coming from smaller companies.

In fact, not only are we at the dawn of a radically new, invisible, contextual computing paradigm, we’re at the starting line of what promises to be an industry-changing race for control in this new era.

Without a doubt, this long-term battle will be a fascinating one to watch.

USA TODAY columnist Bob O'Donnell is president and chief analyst of TECHnalysis Research, a market research and consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. His clients are major technology firms including Microsoft, HP, Dell, and Qualcomm. You can follow him on Twitter @bobodtech.