The singularity concept isn't a simple one. It encompasses not only the idea of an Artificial Intelligence capable of constant self-improvement, but also the notion that inventing and deploying such an AI will trigger ever-accelerating technological growth - so much so that humanity will be changed forever. Now, to be fair, some pieces of technology have already effectively changed the fabric of society. We've seen this happen with the Internet, bridging gaps in time and space and ushering humanity towards frankly inspiring times of growth and development. Even smartphones, through their adoption rates and capabilities, have transformed how humans interact and connect with each other, even sparking some smartphone-related psychological conditions. But all of those will definitely, definitely pale in comparison to the changes that might ensue following the singularity.

The thing is, up to now, we've been shackled in our (still tremendous) growth by our own capabilities as a species: our world is built on layers upon layers of brilliant minds that developed the framework of technologies our society is now interwoven with. This means that as fast as development has been, it has still been paced by humanity's own ability to evolve and learn. Each advance has come with a near-complete understanding of what came before it: a cohesive whole, each step provable and verifiable through the scientific method - a veritable standing on the shoulders of giants. What happens, then, when we lose sight of the thought process behind a development - when the reasoning behind it is so intricate that we can't really follow it? When we deploy technologies and programs we don't fully understand? Enter the singularity, an event we're no longer walking towards: it's more of a hurdle race now, and, perhaps more worryingly, one no longer fully controlled by humans.

Google has been one of the companies at the forefront of AI development and research, much to the chagrin of AI-realists Elon Musk and Stephen Hawking, who have been extremely vocal about the dangers they believe unchecked development in this field could bring to humanity. One of Google's star AI projects is AutoML, announced by the company in May 2017. Its purpose: to design other, smaller-scale "child" AIs, with AutoML acting as the controller neural network that proposes and refines them. And that it did: on smaller benchmarks (such as CIFAR-10 and Penn Treebank), Google engineers found that AutoML could produce designs that performed on par with those hand-crafted by AI development experts. The next step was to see how AutoML's designs would fare on larger datasets. For that purpose, Google tasked AutoML with developing an AI geared specifically towards recognizing objects - people, cars, traffic lights, kites, backpacks - in live video. This AutoML brainchild, named NASNet by Google engineers, delivered better results than other, human-engineered image recognition systems.
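The controller-and-child loop described above can be caricatured as a search over architecture choices: the controller proposes a child design, the child is trained and scored, and that score steers the next proposal. The sketch below is an illustrative random-search stand-in, not Google's actual AutoML code - the search space, the `evaluate()` scoring function, and all names are assumptions made purely for this example (real NAS trains each child network, which is far more expensive).

```python
import random

# Toy search space: the "controller" samples child-network hyperparameters.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "filters": [16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def sample_architecture(rng):
    """Controller step: propose a child architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the child network and measuring validation
    accuracy; a synthetic score so the sketch runs without a GPU."""
    return (arch["num_layers"] * arch["filters"]) / (arch["kernel_size"] ** 2)

def search(iterations=50, seed=0):
    """Run the propose-evaluate-feedback loop and keep the best child."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(iterations):
        arch = sample_architecture(rng)   # controller proposes a child
        score = evaluate(arch)            # child is "trained" and scored
        if score > best_score:            # feedback guides what we keep
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search()
    print("best architecture found:", arch)
```

In the real system, the feedback step is far richer - AutoML's controller is itself a neural network updated with reinforcement learning rather than a keep-the-best filter - but the propose/evaluate/update skeleton is the same.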

According to the researchers, NASNet was 82.7% accurate at predicting images - 1.2 percentage points better than any previously published result from a human-designed system. On object detection, NASNet also beat the best published results by 4 percentage points, achieving a 43.1% mean Average Precision (mAP). Additionally, a less computationally demanding version of NASNet outperformed the best similarly-sized models for mobile platforms by 3.1 percentage points. In other words, on these benchmarks, an AI-designed system is actually better than any published human-designed one. Now, luckily, AutoML isn't self-aware. But this particular AI has been increasingly put to work in improving its own designs.
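For readers unfamiliar with the metric behind that 43.1% figure: mean Average Precision averages, across object classes, the precision achieved at each correctly-ranked detection. The snippet below is a deliberately simplified, interpolation-free version of the idea; real COCO-style evaluation additionally matches detections to ground truth by overlap (IoU) and averages over several overlap thresholds, so treat this as a sketch of the concept rather than the official metric.

```python
def average_precision(ranked_hits):
    """AP for one class. `ranked_hits` lists detections from most to least
    confident; True means the k-th detection matched a real object."""
    hits, precisions = 0, []
    total_positives = sum(ranked_hits)
    for k, hit in enumerate(ranked_hits, start=1):
        if hit:
            hits += 1
            precisions.append(hits / k)  # precision at each correct detection
    return sum(precisions) / total_positives if total_positives else 0.0

def mean_average_precision(per_class_hits):
    """mAP: the per-class APs averaged over all object classes."""
    aps = [average_precision(hits) for hits in per_class_hits]
    return sum(aps) / len(aps)

if __name__ == "__main__":
    # Two toy classes (say, "car" and "person") with three detections each.
    print(mean_average_precision([[True, False, True], [True, True, False]]))
```

The key intuition: a detector is rewarded not just for finding objects, but for ranking its correct detections ahead of its mistakes.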

AIs are among the most interesting developments of recent years, and have starred in countless stories of humanity being demoted from creators to mere resources. While doomsday scenarios may still be far removed from the realm of possibility, their probability arguably grows with every effort poured into AI development. Some research groups are focused on the ethical boundaries of the AIs being developed, such as Google's own DeepMind and the Future of Life Institute, which counts Stephen Hawking, Elon Musk, and Nick Bostrom - among other high-profile players in the AI space - on its scientific advisory board. The "Partnership on AI to Benefit People and Society" is another such group worth mentioning, as is the Institute of Electrical and Electronics Engineers (IEEE), which has already proposed a set of ethical rules for AI to follow.

Having these monumental developments occur so fast in the AI field is certainly inspiring as a testament to humanity's ingenuity; however, there must be some security measures around them. For one, I ponder how fast these AI-fueled developments can go - and should go - when human scientists find it increasingly difficult to keep up with them and with what they entail. What happens when human engineers see that AI-developed code is better than theirs, but don't really understand it? Should it be deployed? What happens after it's been integrated into our systems? It would certainly be hard for human scientists to revert changes, and fix problems, in lines of code they never fully understood in the first place, wouldn't it?

And what of an acceleration of progress fueled by AIs - one so fast and so great that the changes it brings about are too rapid for humanity to adapt to? What happens when the fabric of society is so rife with changes and developments that we can't really internalize them, and can't adapt to how society should work? There have to be ethical and deployment boundaries, and progress will have to be kept in check - progress for progress's sake would simply be self-destructive if the slower part of the equation - humans themselves - don't know how, and aren't given time, to adapt. Even for most of us enthusiasts, how our CPUs and graphics cards work is already just a vague idea and an incomplete schematic in our minds. What to say of systems and designs conceived by machines and bits of code - would we really understand them? I'd like to cite Arthur C. Clarke's third law here: "Any sufficiently advanced technology is indistinguishable from magic." Aren't AI-created AIs already blurring that line, and can we trust ourselves to understand everything that entails?

This article isn't meant to be a doomsday-scenario planner, or even a negative piece on AI. These are some of the most interesting times - and developments - most of us have ever seen, and the steps taken here may prove to be some of the most far-reaching in our history - and future - as a species. The way from Homo sapiens to Homo Deus is rife with dangers, though; debate and conscious thought about what these scenarios might entail can only better prepare us for whatever developments come. Follow the source links for various articles and takes on this issue - it really is a world out there.