Given the pace of recent technological improvement, it's easy to forget that a software company can spend years refining a product only to discover that business imperatives and technical paradigms have shifted, leaving the solution irrelevant.

In the past, building the best technology usually meant making a product that helped customers gain competitive advantage by optimizing their value chains and improving efficiency. Over time, though, progress toward the next big thing has come to be measured more by chip performance and computing power. Yet digital technology, by its nature, makes it more likely that we will miss the next big thing, and its inherent challenges, if we rely on past paradigms to identify coming innovation.

We're steadily moving toward a technological perspective that gives us a view of coming paradigm shifts, or as the philosopher and poet Khalil Gibran put it, "progress lies not in enhancing what is, but in advancing toward what will be." Assessing progress is an important part of understanding a product for what it is, rather than what it isn't, and is crucial to knowing its potential.

Arguably, this also applies to where we are today in technology, where progress lies in the ever-advancing (and immutable) knowledge stored within blocks of information. The link, as I see it, lies in the current state of three converging technologies: blockchain, artificial intelligence and quantum computing. You've certainly heard a lot about the first two, but the third may surprise you, so let me elaborate.

For some time now, I've been saying that intelligent contracts will eventually supersede smart contracts. One reason for that has become very clear in the last few weeks, with the dramatic fall in the trading value of cryptocurrencies.

What has this got to do with smart contracts? To execute a smart contract, which is simply a set of instructions written in code that resides on a blockchain, you need systems that run that code and have reached consensus that the data is correct. In the world of blockchain, reaching that consensus consumes real power (energy), and power is a cost. This, in fact, is a major driver of the cost of cryptocurrencies.

If we apply this thinking to the execution of a smart contract, we can see that the cost of the operation can differ dramatically depending on the time of execution. The graph below reveals that the gaps between coding, cost projection and execution mean that you can never accurately predict the cost of actually executing a contract.

For instance, at the most basic level, consider a hypothetical situation in which all of a business's contracts have been coded as 'smart', the costs associated with those contracts were projected one year ago, and some of the contracts were executed 20 days ago. This leaves a cost difference of more than $1,300. In reality, most organizations use fractions such as 1/64 or 1/32 to calculate this, in which case the cost of a contract depends, at least in part, on the amount of time needed to execute it.
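The cost drift described above can be sketched in a few lines. The numbers here are hypothetical (chosen only to reproduce a gap of more than $1,300, as in the example), not real network prices: the point is that the work a contract performs is fixed at coding time, while the per-unit price is set by the market on execution day.

```python
# Hypothetical illustration: executing the same contract costs whatever
# the per-unit compute price happens to be on execution day, which the
# author of the contract cannot fix in advance.

def execution_cost(compute_units, unit_price_usd):
    """Cost = work the contract performs x market price per unit of work."""
    return compute_units * unit_price_usd

CONTRACT_UNITS = 50_000  # fixed when the contract was coded

projected = execution_cost(CONTRACT_UNITS, 0.001)  # price when costs were projected
actual    = execution_cost(CONTRACT_UNITS, 0.027)  # price on execution day

print(f"projected: ${projected:,.2f}")          # $50.00
print(f"actual:    ${actual:,.2f}")             # $1,350.00
print(f"overrun:   ${actual - projected:,.2f}") # $1,300.00
```

Scale this to millions of contracts and the unpredictability compounds: the same one-line formula, multiplied across a portfolio, makes the budget a function of a price nobody controls.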

This might not seem like a big deal if you are executing a single smart contract, but when you scale up to millions of contracts, the issue becomes obvious. Predictability of cost is just the start; other issues are more technical in nature. At the highest level, this includes the ability of your system to agree on the state of the data itself; that is to say, your ability to reach consensus. This is particularly important for smart contracts, because a mistake is virtually impossible to rectify.

The difficulty of "undoing" an error in smart contracts as they currently stand should also be considered. For example, how do you fix bugs? Anyone who builds or uses software knows that bugs are virtually unavoidable, no matter how hard coders try to avoid them.

With smart contracts, mistakes and bugs take on even greater importance. All participating parties can see the code and can pick apart any bugs to exploit the data it handles, yet the code is also locked, so it cannot be fixed. This leads to situations like the DAO and, more recently, Shadowforks, in which a poorly coded and misunderstood smart contract permanently froze $1 million of Ethereum assets. To put that into context, it's like a business suddenly having all of its inbound payments stuck in transit forever.
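A toy model makes clear why a bug can freeze funds forever. This is not any real chain's API, just an illustration of the principle: once contract code is committed to the chain it is addressed by its content, so "patching" it produces a different contract at a different address, and the buggy original (and anything it holds) is untouched.

```python
import hashlib

class Chain:
    """Toy ledger: contract code is stored under its own hash and never changes."""
    def __init__(self):
        self.contracts = {}

    def deploy(self, code: str) -> str:
        addr = hashlib.sha256(code.encode()).hexdigest()[:8]
        self.contracts[addr] = code  # immutable from here on
        return addr

    def source_at(self, addr: str) -> str:
        return self.contracts[addr]

chain = Chain()
addr = chain.deploy("def withdraw(): raise RuntimeError('bug: funds locked')")

# "Fixing" the bug just creates a second contract at a new address;
# the original, buggy code still lives on-chain at the old one.
fixed_addr = chain.deploy("def withdraw(): return 'ok'")
assert fixed_addr != addr
assert "bug" in chain.source_at(addr)
```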

The issue of performance against cost was recently exposed by CryptoKitties, a basic program that lets users create digital cats. That's all well and good, but the frenzy for coded cats consumed around 15 percent of available network capacity and saw cats selling for more than $100,000 each. This underscores my initial point: you cannot guarantee the cost of your contracts.

In addition to issues of code and cost, current smart contracts are purely logic engines: Turing complete and computationally universal, no less, but still only logic. They do not allow for more advanced use of technology to handle real contractual issues, such as ambiguous language or linked dependencies and decisions.

People counter that smart contracts are supposed to remove the ambiguity from contracts, yet for all but the simplest contracts this is impossible. As a colleague recently pointed out to me, a smart contract for paying out on travel delays may sound appealing, but it is actually fraught with danger.

In my view, this is why 'smart contracts' will eventually become 'intelligent contracts', where 'intelligent' refers not just to the AI used within the code, but also to the intelligence in how we create, manage and run the systems.

That is already happening with the newly released Hyperledger Sawtooth framework. Sawtooth moves away from having the code fixed and immutable on the chain, allowing bugs to be recognized and fixed, and thus avoiding dollars being locked on the chain as well.

Sawtooth also allows contracts to be written and executed in virtually any language. That means developers can start to include AI models within their processes and systems, adding more intelligence to those processes in a safer and more secure way, based on the execution of code and distributed processing.

In addition to this more intelligent way to run code, the performance limitations of the current smart-contract infrastructure have also been virtually removed. Two things were changed to achieve this, but the most important is consensus. Consensus, as I mentioned earlier, is how the blockchain knows that what is written is correct: the majority of the nodes agree. Many methods are available, but most systems currently use proof-of-work (PoW) or proof-of-stake algorithms to achieve distributed consensus, and PoW in particular requires enormous processing power, and thus cost, to generate.
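The energy cost of proof-of-work is easy to see in code. The sketch below is a toy version (a real network's difficulty target is astronomically harder): every node races to find a nonce whose hash meets the target, and every failed attempt is a hash computation, which is to say, spent electricity.

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty: int):
    """Find a nonce such that sha256(data + nonce) starts with
    `difficulty` zero hex digits. Every failed attempt is wasted energy."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work(b"block 42: alice pays bob", difficulty=4)
# On average ~65,000 (16^4) hash attempts just for 4 leading hex zeros;
# real networks demand many more, and every competing node burns the same work.
print(nonce, digest)
```

Note that the work is redundant by design: thousands of nodes run this same loop simultaneously, and all but the winner's effort is discarded, which is why cost scales with participation.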

In the case of PoW, its actual cost to the planet gets progressively worse as more nodes take part. This is one of the reasons cryptocurrencies (and smart contracts) cost so much to execute. And while PoW is supposed to ensure that the blockchain is a fully trustable, unhackable source of truth, the fact is that a blockchain can be hacked, although it is very, very hard to do with today's conventionally available computing power.

This brings me to the third part of the paradigm shift I mentioned earlier: quantum computing. With quantum computing, a node could recompute all the blocks on the chain and write the data to a new block before any current-day system could compute the new hash. This effectively means that proof of work is no guarantee of data integrity.

However, we do have a more intelligent way to deal with both problems, the power and cost of today and the possibility of hacking tomorrow: proof of elapsed time (PoET). While PoET will not totally remove the quantum-computing risk, it does allow for a more rounded approach that provides better control. It also reduces power consumption to a single node, and the associated costs do not increase as more nodes are added to the network.
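The core of PoET is a lottery rather than a race: each node draws a random wait time from a trusted timer (in Sawtooth, one backed by a secure hardware enclave), sleeps, and the first to wake commits the block. A simplified sketch, ignoring the hardware attestation that makes the timer trustworthy:

```python
import random

def poet_elect_leader(node_ids, seed=None):
    """Each node draws a random wait time; the shortest wait wins the
    right to commit. Adding nodes adds no extra work: every node just
    sleeps, so expected cost per block stays roughly constant."""
    rng = random.Random(seed)
    waits = {node: rng.expovariate(1.0) for node in node_ids}
    leader = min(waits, key=waits.get)
    return leader, waits

leader, waits = poet_elect_leader(["node-a", "node-b", "node-c"], seed=7)
print(f"{leader} commits the block after the shortest wait")
```

Contrast this with the proof-of-work loop: there, every node burns hashes in parallel; here, all but the leader do essentially nothing, which is where the power and cost savings come from.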

Now that's an "intelligent" contract!

____________

Kevin Gidney is co-founder of Seal Software. Kevin has held various senior technical positions within Legato, EMC, Kazeon, Iptor and Open Text. His roles have included management, solutions architecture and technical pre-sales, with a background in electronics and computer engineering, applied to both software and hardware solutions.