Current State: data and computing silos


The decentralized NuNet network is designed to operate effectively within a technological ecosystem that currently works according to quite different organizational principles. The current global computing ecosystem and market is largely oligopolistic and vertically integrated, dominated by large ‘cloud’ infrastructure providers, such as Amazon Web Services (AWS), Google Compute Engine, and MS Azure, and by software-as-a-service providers, such as IBM, Oracle, Salesforce, SAP, and others. Most of these providers offer powerful computing platforms within which they provide tightly integrated ecosystems of paid data storage, data processing, and machine learning and AI algorithms. Consumers of these cloud computing infrastructures often use more than one provider and integrate these infrastructures with their own in-house infrastructure, resulting in multi-cloud and hybrid-cloud deployments, which in turn pushes providers to develop solutions for such scenarios. However, each cloud computing provider that offers tools for designing and efficiently operating computing workflows relies on its own proprietary solutions and components, which often duplicate those of its competitors.

The Infrastructure as a Service (IaaS) market accounts for about 22% of the whole public cloud market worldwide and is its fastest-growing component, amounting to 36 billion USD in revenue in 2018. It is estimated to grow by 23% yearly and to reach 59 billion USD by 2023.

Cloud computing ecosystems are therefore isolated to a large extent. For example, processes and computing pipelines implemented on Google Compute Engine cannot integrate at any deep level with computing pipelines implemented on AWS. Historically, this was justified by the fact that the physical concentration of computational resources provided better speed and efficiency, mostly due to fast communication within data centers. This state of affairs, however, is becoming obsolete and hinders the computing market’s potential and further development. Most importantly, huge amounts of unused computing power and data lie scattered and hidden in private computers, mobile phones, wearables, and other private devices. The data produced by private devices, while legally owned by the device owners, is in most cases controlled by, and accessible to, vendors and cloud providers. The raw data accumulated in IoT arrays is locked up and controlled by device manufacturers and their proprietary cloud infrastructures. Again, this is dubiously justified by the requirements of security and privacy, which are currently addressed by creating sealed, centrally managed data silos within each vendor’s boundaries.

This creates a situation in which already radically decentralized physical infrastructure is managed in a centralized fashion, which, as recent examples show, proves sub-optimal even with respect to the security and privacy considerations that justified closed, centralized infrastructures in the first place. It is therefore sensible, if not critical, that future computational architectures be able to take advantage of such latent or siloed resources of both computing power and data.

“Cloud wars” notwithstanding, the global computing landscape is being disrupted by the new technologies of the emerging data economy. Edge and fog computing are beginning to distribute computing power across broad geographical networks of devices, enabled by a variety of new technologies: ultra-fast broadband, wireless and mobile internet connections, a steadily increasing mass of mobile devices with significant storage and processing capacity, and advanced autonomous robots. Distributed computing technologies allow for stream computing, microservice architectures, and Internet of Things ecosystems that can logically manage and execute workflows across different machines and geographical locations. Advances in artificial intelligence and machine learning allow algorithms to perform efficient data transformations autonomously, or with minimal human intervention. Lastly, distributed ledger and related technologies, featuring cryptographically secure identification, automated trustless interactions, smart contracting, reputation management and more, enable fast and efficient micropayment exchanges among individual processes and microservices, again with little to no human intervention.
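The combination sketched above — workflows dispatched across independently owned devices, settled by per-task micropayments — can be illustrated with a toy model. This is a minimal sketch under stated assumptions, not the NuNet protocol or any real API: the `Device`, `Task`, and `dispatch` names, the greedy cheapest-provider scheduler, and the token accounting are all hypothetical illustrations of the general idea.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """A compute provider in the network (illustrative, not a NuNet API)."""
    owner: str
    cpu_cores: int
    price_per_unit: float  # tokens charged per unit of work
    balance: float = 0.0   # tokens earned so far

@dataclass
class Task:
    name: str
    work_units: int

def dispatch(tasks, devices):
    """Toy scheduler: route each task to the cheapest provider and
    settle a micropayment to that provider on completion."""
    ledger = []
    for task in tasks:
        device = min(devices, key=lambda d: d.price_per_unit)
        cost = task.work_units * device.price_per_unit
        device.balance += cost  # micropayment credited to the provider
        ledger.append((task.name, device.owner, cost))
    return ledger

devices = [Device("alice", 4, 0.02), Device("bob", 8, 0.01)]
tasks = [Task("resize-images", 100), Task("train-model", 500)]
ledger = dispatch(tasks, devices)
for entry in ledger:
    print(entry)
```

In a real decentralized network the scheduler would of course also weigh capacity, latency, and provider reputation, and settlement would go through a distributed ledger rather than an in-memory balance; the sketch only shows how workflow routing and micropayment accounting interlock.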

Given these recent developments, still rapidly unfolding, all the major building blocks needed for a globally decentralized computing and data economy are already in place today. And yet, the computing platforms of centralized cloud providers are still largely constrained by closed networks, proprietary payment systems and hard-coded provisioning operations.

These seemingly highly technical points have an importance for humanity and its future that should not be underestimated. The computational universe is becoming an increasingly important part of our life in the physical universe, and has already surpassed the imagination of science-fiction writers of only a few decades ago. Despite these incredible advances, however, this is barely the beginning of the computational revolution. If we think in Kurzweilian terms of a Technological Singularity potentially occurring toward the middle of this century, the majority of the specific technologies that will underlie this Singularity have yet to be created and implemented. The principles according to which we build, operate, use, and share computational resources, in both our physical and computational universes, will greatly influence our ability to tap into human creativity and shape the future of our world in these critical next few decades, as AI systems and other computational networks come to exceed human capabilities in more and more regards.
