Craig Wright on Bitcoin Scalability
Craig Wright is one of the world’s foremost experts on cyber security. His work covers both the public and private domains.
Taking a look at https://www.sans.org/cyber-guardian/cyber-guardians, you can see that Dr Wright is listed as both a red and blue team specialist. To clarify, red-team skills represent attack expertise, that is, understanding the processes used to break into systems. Blue-team skills, on the other hand, represent defence expertise. Wright sits on both sides of the fence. Fundamentally, it is of paramount importance in security to understand the attack in order to best defend against it.
He has been trained more extensively than perhaps anyone else globally, with an incredible list of certifications.
Therefore, concerning SegWit, Dr Wright’s warnings about attack vectors come from a place of profound qualification on the subject.
“I am pro Bitcoin Unlimited. What we need to do is to scale on-chain and not allow SegWit” – states Wright.
Of course, Bitcoin Unlimited’s approach correlates strongly with the scaling method of Bitcoin’s creator, Satoshi Nakamoto.
It is somewhat ironic to even state that increasing Bitcoin’s blocksize enables scalability: Bitcoin was already scalable, as there was no limit in the beginning. Today, there would be no outrageous fees on the Bitcoin network if it were not for an artificial, temporary parameter that Satoshi introduced into the code as a means of early spam prevention.
Dr Wright explained that in the early days it was very cheap to spam the network, which is why a spam-limit parameter was required. Now, this limit can be lifted.
However, the biggest argument used by small-blockers and the Core-aligned flock is that large blocks will cause centralization. This claim rests on a rather loose argument that heavily hijacks the term ‘node’, and it then assumes an incredibly high reliance on ‘validating’ nodes.
The Bitcoin code in the GitHub repository states the purpose of a node in a comment.
According to Satoshi’s whitepaper, nodes were miners. The hijacking of the word seems to have placed unneeded importance on these validating nodes. But what would Bitcoin look like without these ‘nodes’, and how centralized would mining be, with the removal of the blocksize limit?
Gregory Maxwell of Blockstream has in the past stated:
“With gigabyte blocks bitcoin would not be functionally decentralized in any meaningful way: only a small, self-selecting group of some thousands of major banks would have the means and the motive to participate in validation” – Gregory Maxwell
Craig Wright debunks the centralization myth with a very simple analysis:
“There are around 15,000 banks. Add financial organisations including savings and loans… We are up to 60,000. Then add in all the major merchants and operations that need to have transaction data by law, and that’s around 17 million organisations. That is decentralised do you not think?” – Dr Craig Wright
Adam Selene has written extensively on the hijacking of the term ‘node’. In short, any non-mining node is simply a wallet. It doesn’t help the system propagate, and it doesn’t create blocks.
This leads us to SPV wallets. SPV, or “Simplified Payment Verification,” is a technique Satoshi described in the whitepaper: it allows a lightweight client to verify transactions without downloading the entire blockchain.
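As a rough illustration of the idea, the sketch below shows how a lightweight client could check that a transaction belongs to a block by recomputing the Merkle root from the transaction’s hash and a branch of sibling hashes. This is a minimal sketch of the standard technique, not code from any particular wallet; the function names are our own.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes with two rounds of SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_hash: bytes, branch: list,
                         index: int, merkle_root: bytes) -> bool:
    """Recompute the Merkle root from a transaction hash and its branch.

    `branch` holds the sibling hashes from leaf to root; `index` is the
    transaction's position in the block, which determines whether each
    sibling is concatenated on the left or the right.
    """
    h = tx_hash
    for sibling in branch:
        if index % 2 == 0:
            h = double_sha256(h + sibling)
        else:
            h = double_sha256(sibling + h)
        index //= 2
    return h == merkle_root
```

With this check, an SPV client only needs block headers plus a short branch of hashes per transaction, rather than the full chain.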
Craig Wright states the following:
“Now, the first thing we need to understand is that all encryption systems are probabilistic. Password systems and any modern information security system works on probabilistic information. The so-called experts who talk about the probabilistic system of bitcoin fail to comprehend that strong encryption is probabilistic.
Fraud proofs are nowhere near as difficult as anyone thinks. They do not require some special cryptographic protocol. They are far simpler to implement than anyone seems to understand.
The solution is incredibly simple. All you need to do is randomly select a series of nodes on the network and query whether the inclusion of your transaction has occurred on that node. Each query would be random. Using a simple Bayesian algorithm, we could use a failure model to analyse the likelihood of a double spend or other attack.” – Dr Craig Wright
Wright explains that each time we pick a node at random and request our transaction, we can expect one of the following results:
- We receive our transaction as we expected,
- We receive an alternate transaction such as a double spend, or
- We receive nothing.
Basically, checking eight nodes would give you 99.9999% assurance that your transaction will be included in the next block mined (assuming that the cap has been removed and we don’t have all these delays). The limitation imposed by the cap severely diminishes the security of the network.
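Under a simple independence assumption, the arithmetic behind this kind of claim is easy to sketch. If a fraction `f` of the nodes you might query would misreport (returning a conflicting transaction or nothing), and each of `k` queries picks a node independently at random, the chance that every answer misleads you is `f**k`. The function below is an illustrative model of that idea, not Wright’s exact Bayesian failure analysis; the parameter values are assumptions for the example.

```python
def query_confidence(f: float, k: int) -> float:
    """Probability that at least one of k independent random node
    queries returns an honest answer, assuming a fraction f of
    queried nodes would misreport.
    """
    return 1.0 - f ** k

# Example: even if 10% of queried nodes misreported, eight random
# queries would leave only about a one-in-a-hundred-million chance
# that every answer was wrong.
print(query_confidence(0.10, 8))
```

A single honest answer that conflicts with the others is enough to flag a possible double spend, which is why confidence grows so quickly with the number of queries.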
In under two seconds, 99.98% of the total hash power would have received your transaction. These figures come from the existing network. This means that without the cap, you can be assured even of zero-confirmation transactions in a minimal amount of time. This was not the case in 2010, but that has nothing to do with the protocol; it is down to the economics of the system. As miners become more commercial and professional, the overall security and efficiency of the network increases exponentially.
Craig Wright concludes “0-confirmation WAS secure before Core”.