ABCMint is a quantum resistant cryptocurrency with the Rainbow Multivariable Polynomial Signature Scheme.
Good day, the price is going up to 0.3 USDT.

ABCMint Second Foundation

ABCMint Second Foundation has been the first third-party organization focused on post-quantum cryptography research and technology, aiming to help improve the ABCMint technology ecosystem, since 2018. https://abcmintsf.com https://abcmintsf.com/exchange

What is ABCMint?

ABCMint is a quantum-resistant cryptocurrency built on the Rainbow multivariate polynomial signature scheme. Cryptocurrencies and blockchain technology have attracted a significant amount of attention since 2009. While some cryptocurrencies, including Bitcoin, are used extensively around the world, these cryptocurrencies will eventually become obsolete and be replaced once quantum computers become available. For instance, Bitcoin uses the elliptic curve signature algorithm ECDSA. If a Bitcoin user's public key is exposed on the public chain, a quantum computer will be able to reverse-engineer the private key in a short period of time. This means that should an attacker decide to use a quantum computer to break ECDSA, he or she will be able to spend the bitcoin in the wallet.

The ABCMint Foundation has modified the structure of the coin's core to resist quantum computers, using the Rainbow multivariate polynomial signature scheme, which is quantum-resistant, at its core. This is a fundamental solution to the major threat that future quantum computers pose to digital money. In addition, the ABCMint Foundation has implemented a new proof-of-work (mining) algorithm, "ABCardO", which differs from Bitcoin's mining. This algorithm is believed to be beneficial to the development of the mathematical field of multivariate polynomials.

Rainbow Signature - the quantum-resistant signature based on multivariate polynomials

Unbalanced Oil and Vinegar (UOV) is one of the oldest and most well-researched signature schemes in the field of multivariate cryptography.
It was designed by J. Patarin in 1997 and has withstood more than two decades of cryptanalysis. The UOV scheme is very simple, small, and fast. However, the main drawback of UOV is its large public key, which is not conducive to practical use on a blockchain.
The Rainbow signature is an improvement on the oil and vinegar signature that increases the efficiency of unbalanced oil and vinegar. The basic concept is a multi-layered structure that generalizes oil and vinegar.

PQC - Post Quantum Cryptography

The public key cryptosystem was a breakthrough in modern cryptography in the late 1970s, and it has become an increasingly important part of our communications. The Internet and other communication systems rely heavily on the Diffie-Hellman key exchange, RSA encryption, and the use of DSA, ECDSA, or related algorithms for digital signatures. The security of these cryptosystems depends on the difficulty of number theory problems such as integer factorization and the discrete logarithm problem. In 1994, Peter Shor demonstrated that quantum computers can solve all of these problems in polynomial time, which would render the security of these cryptosystems moot. The field that develops replacements is known as "post-quantum cryptography" (PQC). In August 2015, the U.S. National Security Agency (NSA) released an announcement regarding its plans to transition to quantum-resistant algorithms. In December 2016, the National Institute of Standards and Technology (NIST) announced a call for proposals for quantum-resistant algorithms, with a deadline of November 30, 2017; the submissions included Rainbow, the signature scheme used by ABCMint.
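The oil-and-vinegar idea can be sketched in toy form. The following is a minimal, deliberately insecure illustration (a tiny field GF(31), made-up parameters, and it omits the secret invertible linear transform that a real UOV/Rainbow key composes with the central map to hide its structure). It shows the core trick: once the "vinegar" variables are fixed, the quadratic system becomes linear in the "oil" variables and can be solved to sign.

```python
import random

p = 31            # tiny prime field GF(31) -- toy size only
v, o = 4, 2       # vinegar and oil variable counts
n = v + o
rng = random.Random(1)

def keygen():
    # o quadratic polynomials; coefficient of x_i*x_j stored at (i, j), i <= j.
    # Oil*oil terms (both indices >= v) are forced to zero -- the UOV trick.
    polys = []
    for _ in range(o):
        c = {}
        for i in range(n):
            for j in range(i, n):
                if i >= v and j >= v:
                    continue          # no oil*oil terms
                c[(i, j)] = rng.randrange(p)
        polys.append(c)
    return polys

def evaluate(polys, x):
    return [sum(a * x[i] * x[j] for (i, j), a in c.items()) % p for c in polys]

def solve2x2(A, b):
    # Solve a 2x2 linear system mod p, or return None if singular.
    det = (A[0][0]*A[1][1] - A[0][1]*A[1][0]) % p
    if det == 0:
        return None
    inv = pow(det, p - 2, p)
    x0 = (A[1][1]*b[0] - A[0][1]*b[1]) * inv % p
    x1 = (A[0][0]*b[1] - A[1][0]*b[0]) * inv % p
    return [x0, x1]

def sign(polys, target):
    # Fixing the vinegar variables makes each polynomial LINEAR in the oils.
    while True:
        vin = [rng.randrange(p) for _ in range(v)]
        A, b = [], []
        for k, c in enumerate(polys):
            const = sum(a * vin[i] * vin[j] for (i, j), a in c.items()
                        if i < v and j < v) % p
            row = [sum(a * vin[i] for (i, j), a in c.items()
                       if j == jo and i < v) % p for jo in range(v, n)]
            A.append(row)
            b.append((target[k] - const) % p)
        oil = solve2x2(A, b)
        if oil is not None:      # retry with new vinegar if system is singular
            return vin + oil

polys = keygen()
target = [rng.randrange(p) for _ in range(o)]   # stands in for H(message)
sig = sign(polys, target)
assert evaluate(polys, sig) == target           # verification: P(sig) == H(m)
```

A verifier only needs to evaluate the public polynomials at the signature, which is cheap; recovering a signature without knowing the oil/vinegar split means solving a random-looking multivariate quadratic system, which is the hard problem these schemes rest on.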
Technical: Upcoming Improvements to Lightning Network
Price? Who gives a shit about price when Lightning Network development is a lot more interesting????? One thing about LN is that because there's no need for consensus before implementing things, figuring out the status of things is quite a bit more difficult than on Bitcoin. On one hand it lets larger groups of people work on improving LN faster without having to coordinate so much. On the other hand it leads to some fragmentation of the LN space, with compatibility problems occasionally coming up. The below is just a smattering of LN stuff I personally find interesting. There's a bunch of other stuff, like splicing and dual-funding, that I won't cover --- the post is long enough as-is, and besides, some of the below aren't as well-known. Anyway.....
Yeah the exciting new Lightning Network channel update protocol!
Solves "toxic waste" problem. In the current Poon-Dryja update protocol, old state ("waste") is dangerous ("toxic") because if your old state is acquired by your most hated enemy, they can use that old state to publish a stale unilateral close transaction, which your counterparty must treat as a theft attempt and punish you, causing you to lose funds. With Decker-Russell-Osuntokun old state is not revoked, but is instead gainsaid by later state: instead of actively punishing old state, it simply replaces the old state with a later state.
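The punish-vs-replace distinction can be sketched as a hypothetical toy model (the function names and state representation below are illustrative, not any implementation's API):

```python
def poon_dryja_react(broadcast_state, revoked_states):
    # Poon-Dryja: old states were revoked. Publishing one lets the
    # counterparty claim ALL channel funds via the punishment path.
    if broadcast_state in revoked_states:
        return "punish: counterparty sweeps everything"
    return "valid close"

def decker_russell_osuntokun_react(broadcast_num, latest_num):
    # Decker-Russell-Osuntokun: old state is not toxic. Any later state
    # simply supersedes it, because a later update transaction can spend
    # on top of the stale one. No funds are lost for broadcasting it.
    if broadcast_num < latest_num:
        return "replace: publish state %d on top" % latest_num
    return "valid close"

assert "punish" in poon_dryja_react(2, revoked_states={1, 2})
assert "state 5" in decker_russell_osuntokun_react(2, latest_num=5)
```

The key behavioral difference: in the first model an honest node that accidentally broadcasts stale state (e.g. after restoring an old backup) loses everything; in the second it merely gets corrected.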
Allows multiple participants in the update protocol. This can be used as the update protocol for a channel factory with 3 or more participants, for example (channels are not practical for multiple participants, since the loss of any one participant makes the channel completely unusable; it's more sensible to have a multiple-participant factory that splits up into 2-participant channels). Poon-Dryja only supports two participants. Another update protocol, Decker-Wattenhofer, also supports multiple participants, but requires much larger locktimes in case of a unilateral close (measurable in weeks, whereas Poon-Dryja and Decker-Russell-Osuntokun can be measured in hours or days).
It uses nLockTime in a very clever way.
No, it does not solve the "watchtower needed" problem. Decker-Russell-Osuntokun still requires watchtowers if you're planning to be offline for a long time.
What might be causing confusion is that it was initially thought that watchtowers under Decker-Russell-Osuntokun could be made more efficient by having the channel participant update a single "slot" in the watchtower, rather than having to consume one "slot" per update as in Poon-Dryja. However, the existence of the "poisoned blob" attack by ZmnSCPxj means that having a replaceable "slot" is risky if the other participant of the channel can spoof you. And the safest way to prevent spoofing somebody is to identify that somebody --- but now that means the watchtower can surveil the activities of somebody it has identified, losing privacy.
Requires base layer change --- SIGHASH_NOINPUT / SIGHASH_ANYPREVOUT. This is still being worked out and may potentially not reach Bitcoin anytime soon.
Determining costs of routes is somewhat harder, and may complicate routefinding algorithms. In particular: every channel today has a "CLTV Delta", a number of blocks by which the total maximum delay of the payment is increased. This maximum delay is the maximum amount of time by which an outgoing payment can be locked, and needs to be reduced for UX purposes. Decker-Russell-Osuntokun will also add a "CSV minimum", a number of blocks, which must be smaller than the delay of an HTLC going through the channel. Current routefinding algos are good at minimizing a summed-up cost (like the "CLTV Delta") so the "CSV minimum" may require discovering / developing new routefinding algos.
Due to the "CSV minimum" above, existing nodes that don't understand Decker-Russell-Osuntokun cannot reliably route over Decker-Russell-Osuntokun channels, as they might not impose this minimum properly.
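A hedged sketch of why the "CSV minimum" complicates routefinding: the CLTV delta is a *summed cost*, while the CSV minimum is a *per-hop constraint* on the delay remaining at that hop. The model below is hypothetical (the field names `cltv_delta` and `csv_min` and the delay accounting are simplified assumptions, not any node implementation's):

```python
def route_feasible(hops, max_total_delay):
    # hops: payer-to-payee list of dicts with 'cltv_delta' and optional 'csv_min'
    total = sum(h["cltv_delta"] for h in hops)
    if total > max_total_delay:          # classic summed-cost check
        return False
    # The delay remaining at each hop is what its HTLC is locked for;
    # a Decker-Russell-Osuntokun channel needs that to EXCEED its CSV minimum.
    remaining = total
    for h in hops:
        if h.get("csv_min", 0) >= remaining:
            return False
        remaining -= h["cltv_delta"]
    return True

# A hop deep in the route with a large CSV minimum can kill an otherwise
# cheap route, which a pure cost-minimizing search would never notice.
assert route_feasible([{"cltv_delta": 40}, {"cltv_delta": 40, "csv_min": 20}], 100)
assert not route_feasible([{"cltv_delta": 40}, {"cltv_delta": 40, "csv_min": 100}], 100)
```

This is why minimizing summed CLTV deltas is not enough: the search also has to track the remaining delay at every hop and prune routes that violate any hop's constraint.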
Multipart payments / AMP
Splitting up large payments into smaller parts!
There are at least three variants of multipart payments: Original, Base, and High.
Original is the original AMP proposed by Lightning Labs. It sacrifices proof-of-payment in order to allow each path to have a different payment hash. This is done by having the payer use a derivation scheme to generate each part's payment preimage from a seed, then splitting the seed (using secret sharing) across the parts. The receiver can only reconstruct the seed if all parts reach it.
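The all-or-nothing seed split can be sketched with n-of-n XOR secret sharing; the `seed + b"part-0"` derivation at the end is a made-up stand-in for whatever real derivation scheme the payer uses, shown only to illustrate how per-part preimages would come from the seed:

```python
import os, functools, hashlib

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_seed(seed, parts):
    # n-of-n XOR secret sharing: ALL shares are needed to rebuild the seed,
    # so the receiver learns nothing until every part has arrived.
    shares = [os.urandom(len(seed)) for _ in range(parts - 1)]
    last = functools.reduce(xor_bytes, shares, seed)
    return shares + [last]

def combine(shares):
    return functools.reduce(xor_bytes, shares)

seed = os.urandom(32)
shares = split_seed(seed, 3)                 # one share rides in each part
assert combine(shares) == seed               # receiver needs all three
# Hypothetical per-part preimage derivation from the reconstructed seed:
preimage_0 = hashlib.sha256(seed + b"part-0").digest()
```

Any strict subset of shares is statistically independent of the seed, which is exactly the property that stops the receiver from claiming a partial payment.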
Base simply uses the same payment hash for all routes. This retains proof-of-payment (i.e. an invoice is undeniably signed by the receiver, including a payment hash in the invoice; public knowledge of the payment preimage is proof that the receiver has in fact received money, and any third party can be convinced of this by being shown the signed invoice and the preimage). The receiver could just take one part of the payment, claim to have been underpaid by the payer, and deny service --- but claiming any one part is enough to publish the payment preimage, creating a proof-of-payment. So the receiver can provably be held liable even if it took just one part, and its incentive is therefore to take in the payment only once all parts have arrived.
High requires elliptic curve points / scalars. It combines both Original and Base, retaining proof-of-payment (sacrificed by Original) and ensuring cryptographically-secure waiting for all parts (rather than the merely economically-incentivized waiting of Base). This is done by using the elliptic curve homomorphism over addition of scalars to add together the payer-provided preimage (really a scalar) of Original with the payee-provided preimage (really a scalar) of Base.
Better expected reliability. Channels are limited by capacity. By splitting up into many smaller payments, you can fit into more channels and be more likely to successfully reach the payee.
Capacity on multiple of your channels can be used to pay. Currently if you have 0.05BTC on one channel and 0.05BTC on another channel, you can't pay 0.06BTC without first rebalancing your channels (and paying fees for the rebalance first, whether the payment succeeds or not). With multipart you can now combine the capacities of multiple of your channels, and only pay fees for combining them if the payment pushes through.
Wumbo payments (oversized payments) come "for free" without having to be explicitly supported by the nodes of the network: you just split up wumbo payments into parts smaller than the wumbo limit.
Multipart will have higher fees. Part of the fee of each channel is a flat base fee. Going through multiple paths means paying that base fee more times.
It's not clear how to split up payments. Heuristics for payment splitting have to be derived and developed and tested.
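As a hedged illustration of how naive a first splitting heuristic might be, here is a greedy largest-channel-first split. Real heuristics would also have to weigh base fees (see above: more parts means more base fees), proportional fees, and per-path reliability, none of which this toy considers:

```python
def split_payment(amount_msat, channel_capacities):
    # Greedy heuristic: drain the largest channels first.
    parts = []
    for cap in sorted(channel_capacities, reverse=True):
        if amount_msat == 0:
            break
        take = min(cap, amount_msat)
        parts.append(take)
        amount_msat -= take
    if amount_msat > 0:
        raise ValueError("insufficient total capacity")
    return parts

# The 0.05 + 0.05 -> 0.06 example from earlier, in millisatoshi:
print(split_payment(6_000_000, [5_000_000, 5_000_000]))
```

This greedy split minimizes the number of parts (and thus base fees) but maximizes how much each chosen channel is drained, which can hurt future payments; that tension is one reason splitting heuristics need real development and testing.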
Payment points / scalars
Using the magic of elliptic curve homomorphism for fun and Lightning Network profits! Basically, currently on Lightning an invoice has a payment hash, and the receiver reveals a payment preimage which, when inputted to SHA256, returns the given payment hash. Instead of using payment hashes and preimages, just replace them with payment points and scalars. An invoice will now contain a payment point, and the receiver reveals a payment scalar (private key) which, when multiplied with the standard generator point G on secp256k1, returns the given payment point. This is basically Scriptless Script usage on Lightning, instead of HTLCs we have Scriptless Script Pointlocked Timelocked Contracts (PTLCs).
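The point/scalar relationship can be demonstrated directly. Below is a toy, non-constant-time secp256k1 implementation (for illustration only; real nodes use hardened libraries). It builds a payment point from a payment scalar and then checks the homomorphism that SHA256 lacks: (a + b)·G = a·G + b·G.

```python
# secp256k1 parameters (SEC 2)
P = 2**256 - 2**32 - 977                        # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    # Affine point addition; None is the point at infinity.
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None
    if a == b:
        lam = (3 * a[0] * a[0]) * pow(2 * a[1], P - 2, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], P - 2, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def scalar_mul(k, pt=G):
    # Double-and-add scalar multiplication.
    out = None
    while k:
        if k & 1:
            out = point_add(out, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return out

payment_scalar = 0xC0FFEE                       # payee's secret
payment_point = scalar_mul(payment_scalar)      # what goes in the invoice
# The homomorphism hashes don't have: (a + b)*G == a*G + b*G
a, b = 12345, 67890
assert scalar_mul((a + b) % N) == point_add(scalar_mul(a), scalar_mul(b))
```

Everything in the PTLC family (decorrelation, stuckless, escrow, High AMP) leans on that last line: you can add secrets together "under" the point without anyone learning the secrets themselves.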
Enables a shit-ton of improvements: payment decorrelation, stuckless payments, noncustodial escrow over Lightning (the Hodl Hodl Lightning escrow is custodial, read the fine print), High multipart.
It's the same coolness that makes Schnorr Signatures cool. ECDSA, despite being based on elliptic curves, is not cool because the hash-the-nonce operation needed to prevent it from infringing Schnorr's fatherfucking patent also prevents ECDSA from using the cool elliptic curve homomorphism of addition over scalars.
Requires Schnorr on Bitcoin layer.
Actually, we can work with 2p-ECDSA without waiting for Schnorr. We get back the nice elliptic curve homomorphism by passing the ECDSA nonce through another cryptosystem, Paillier. This gets us the ability to do Scriptless Script. I think it has only 80-bits security because of going through Paillier though.
Basically the conundrum is: we could implement 2p-ECDSA now, hope we never have to test the 80-bit security anytime soon, then switch to Schnorr with 128-bit security later (which means reimplementing a bunch of things, because the calculations are different and the data that needs to be exchanged between channel participants is very different between the 2p-ECDSA and Schnorr). Reimplementing is painful and is more dev work. If we don't implement with 2p-ECDSA now, though, we will be delaying all the nice elliptic curve goodness (stuckless, noncustodial escrow, payment decorrelation) until Bitcoin gets Schnorr.
Elliptic curve discrete log problem is theoretically quantum-vulnerable. If we can't find a quantum-resistant homomorphic construction, we'll have to give up the advantages (payment decorrelation, stuckless payments, noncustodial escrow over Lightning) we got from using elliptic curve points and go back to boring old hashes.
Ensuring that payers cannot access data or other digital goods without proof of having paid the provider. In a nutshell: the payment preimage used as a proof-of-payment is the decryption key of the data. The provider gives the encrypted data, and issues an invoice. The buyer of the data then has to pay over Lightning in order to learn the decryption key, with the decryption key being the payment preimage.
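A minimal sketch of the idea, assuming a toy SHA-256-based stream cipher (a stand-in for illustration, not a real cipher choice from any spec): the invoice's payment hash commits to the decryption key, so settling the payment necessarily reveals the key.

```python
import hashlib, os

def xor_stream(key, data):
    # Toy stream cipher: keystream blocks from SHA-256(key || counter).
    # XOR is its own inverse, so the same call encrypts and decrypts.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(x ^ y for x, y in zip(data[i:i + 32], block))
    return bytes(out)

# Provider picks a preimage, encrypts the goods, and invoices its hash.
preimage = os.urandom(32)
payment_hash = hashlib.sha256(preimage).digest()   # goes in the invoice
ciphertext = xor_stream(preimage, b"the secret dataset")

# Buyer pays over Lightning; settling the HTLC reveals the preimage,
# which is simultaneously the proof-of-payment and the decryption key.
assert xor_stream(preimage, ciphertext) == b"the secret dataset"
assert hashlib.sha256(preimage).digest() == payment_hash
```

The binding works because the HTLC on-path enforces "funds move only if the preimage is revealed", so the buyer cannot pay without learning the key, and the seller cannot take the money without disclosing it.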
Enables data providers to sell data. This could be sensors, livestreams, blogs, articles, whatever.
There's no scheme to determine if the data provider is providing actually-useful data. The data-provider could just stream https://random.org for example. This is a potentially-impossible problem. Even if the data-provider provides a "sample" of the data, and is able to derive some proof that the sample is indeed a true snippet of the encrypted data, the rest of the data outside of the sample might just be random junk.
No more payments getting stuck somewhere in the Lightning network without knowing whether the payee will ever get paid! (That's actually a bit of an overclaim: payments can still get stuck, but what "stuckless" really enables is that we can safely run another parallel payment attempt until any one of the attempts gets through.) Basically, by using the ability to add points together, the payer can enforce that the payee can only claim the funds if it knows two pieces of information:
The payment scalar corresponding to the payment point in the invoice signed by the payee.
An "acknowledgment" scalar provided by the payer to the payee via another communication path.
This allows the payer to make multiple payment attempts in parallel, unlike the current situation where we must wait for an attempt to fail before trying another route. The payer only needs to ensure it generates different acknowledgment scalars for each payment attempt. Then, if at least one of the payment attempts reaches the payee, the payee can then acquire the acknowledgment scalar from the payer. Then the payee can acquire the payment. If the payee attempts to acquire multiple acknowledgment scalars for the same payment, the payer just gives out one and then tells the payee "LOL don't try to scam me", so the payee can only acquire a single acknowledgment scalar, meaning it can only claim a payment once; it can't claim multiple parallel payments.
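The "payer releases at most one acknowledgment scalar per invoice" rule can be sketched as simple bookkeeping on the payer's side (the class and method names here are hypothetical, and the scalars are plain mod-n integers rather than full PTLC constructions):

```python
import secrets

# secp256k1 group order, used only as the scalar modulus
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

class StucklessPayer:
    def __init__(self):
        self.attempt_scalars = {}   # invoice_id -> list of ack scalars
        self.released = {}          # invoice_id -> the ONE released scalar

    def new_attempt(self, invoice_id):
        # Each parallel attempt gets its own acknowledgment scalar,
        # which offsets that attempt's payment point.
        ack = secrets.randbelow(N)
        self.attempt_scalars.setdefault(invoice_id, []).append(ack)
        return ack

    def release(self, invoice_id, attempt_index):
        # "LOL don't try to scam me": hand out at most ONE scalar per
        # invoice, so the payee can settle at most one attempt.
        if invoice_id not in self.released:
            self.released[invoice_id] = \
                self.attempt_scalars[invoice_id][attempt_index]
        return self.released[invoice_id]

payer = StucklessPayer()
a0 = payer.new_attempt("inv1")
a1 = payer.new_attempt("inv1")          # parallel attempt, distinct scalar
first = payer.release("inv1", 1)        # payee asks for attempt 1's scalar
assert first == a1
assert payer.release("inv1", 0) == a1   # a second request gets the SAME one
```

The cryptography (not shown here) is what makes this bookkeeping binding: without the matching acknowledgment scalar, the payee's payment scalar alone cannot claim the other attempts' PTLCs.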
Can safely run multiple parallel payment attempts as long as you have the funds to do so.
Needs payment point + scalar
Non-custodial escrow over Lightning
The "acknowledgment" scalar used in stuckless can be reused here. The acknowledgment scalar is derived as an ECDH shared secret between the payer and the escrow service. On arrival of payment to the payee, the payee queries the escrow to determine if the acknowledgment point is from a scalar that the escrow can derive using ECDH with the payer, plus a hash of the contract terms of the trade (for example, to transfer some goods in exchange for Lightning payment). Once the payee gets confirmation from the escrow that the acknowledgment scalar is known by the escrow, the payee performs the trade, then asks the payer to provide the acknowledgment scalar once the trade completes. If the payer refuses to give the acknowledgment scalar even though the payee has given over the goods to be traded, then the payee contacts the escrow again, reveals the contract terms text, and requests to be paid. If the escrow finds in favor of the payee (i.e. it determines the goods have arrived at the payer as per the contract text) then it gives the acknowledgment scalar to the payee.
True non-custodial escrow: the escrow service never holds any funds.
Needs payment point + scalar.
Because elliptic curve points can be added (unlike hashes), for every forwarding node we can add a "blinding" point / scalar. This prevents multiple forwarding nodes from discovering that they have been on the same payment route. This is unlike the current payment hash + preimage, where the same hash is used along the route. In fact, the acknowledgment scalar we use in stuckless and escrow can simply be the sum of the blinding scalars used at each forwarding node.
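In scalar terms only (the hops would actually see the corresponding *points*, i.e. each blinding scalar b contributed as b·G to the payment point), the per-hop decorrelation and the "acknowledgment = sum of blinding scalars" identity look like this:

```python
import secrets

# secp256k1 group order, used as the scalar modulus
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

payment_scalar = secrets.randbelow(N)                 # payee's secret
blinding = [secrets.randbelow(N) for _ in range(3)]   # one per forwarding hop

# Each hop's lock differs from the previous hop's by that hop's blinding
# scalar, so the per-hop values along one route look unrelated.
hop_secrets, acc = [], payment_scalar
for b in blinding:
    acc = (acc + b) % N
    hop_secrets.append(acc)

# The acknowledgment scalar can simply be the sum of the blinding scalars:
ack = sum(blinding) % N
assert hop_secrets[-1] == (payment_scalar + ack) % N
assert len(set(hop_secrets)) == len(hop_secrets)      # all distinct (w.h.p.)
```

Compare with today's HTLCs, where `hop_secrets` would be the same hash at every hop, letting any two colluding forwarders trivially link themselves to one payment.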
Privacy! Multiple forwarding nodes cannot coordinate to try to uncover the payer and payee of each payment.
The cryptocurrency community has long been discussing one technical feature of the blockchain that directly affects its future: the threat posed to the blockchain by so-called quantum computing. If this threat materializes, crypto assets will simply stop functioning, and all problems with their regulation will disappear by themselves. Indeed, what is the point of building a serious regulatory system for an instrument that will soon become inoperable?

Most modern cryptocurrencies are built on particular cryptographic algorithms that ensure their security. The level of protection is determined by the amount of work required to find the key — the secret that determines the final result of the cryptographic transformation. When attacking such a system, a classical computer must test possible keys one after another. A quantum computer does much better: Grover's algorithm searches a key space quadratically faster, and Shor's algorithm solves factoring and discrete logarithms outright in polynomial time, thereby compromising the cryptosystem. The threat to Bitcoin is that sufficiently powerful quantum computers would be able to attack the encryption and digital signature schemes used in blockchain technology and virtual currencies. Such computations would in principle allow an attacker to forge smart contracts and steal "coins".

Most cryptocurrencies use public-key algorithms for communications and, in particular, for digital signatures. Public-key cryptography is based on one-way mathematical functions — operations that are easy in one direction and hard in the other. If quantum computers rather than classical ones are used to solve the factorization or discrete logarithm problem, it is solved much faster: a quantum computer could derive the secret key from the public key in minutes, and knowledge of the secret key gives access to the corresponding address on the Bitcoin network.
It turns out that the owner of a quantum computer would be able to break the public-key encryption system and write off (steal) "coins" from the corresponding address. This feature of quantum computing is the main danger for Bitcoin. According to some estimates, a quantum computer will be able to derive the secret key from the public key by 2027. Some commentators believe that with the advent of full-fledged quantum computers, the era of cryptocurrencies and blockchain will come to its logical end: the cryptographic systems on which cryptocurrencies are based will be compromised, and the cryptocurrencies themselves will become worthless. Allegedly, the first thing the owner of a quantum computer will do is quickly mine the remaining bitcoins, ethers, and other popular crypto-coins. Experts have estimated that breaking Bitcoin would require a quantum computer with about ten thousand qubits, and that may not be so long a wait — perhaps ten years, or even less.

IBM 50Q System: an IBM cryostat wired for a 50-qubit system. Photo from www.ibm.com

However, not everyone shares this opinion. According to newer forecasts, a commercially viable quantum computer will not appear until 2040. Many cryptocurrency experts are sure that by then developers will have prepared and adapted the blockchain to the new reality: they will modify the cryptocurrency code and protect the technologies used in it from attack. Analysts emphasize, however, that although an attacker with a powerful quantum computer could get the secret key from the public key, it is impossible to get the public key from the Bitcoin address of the recipient of a transaction. The public key is converted to a Bitcoin address by several one-way hash functions that are resistant to quantum computation. In practice, though, the public key still appears on the network eventually: this occurs when a transaction is signed by the sender of the "coin".
Otherwise, the network would not be able to confirm the transaction, because there is no other way to verify the authenticity of the sender's signature. The widespread fear of a direct threat to Bitcoin from quantum computing is exaggerated and comes from ignorance. In fact, through crowdsourcing, blockchain technology solves many problems, including reducing threats to its security from quantum computers. That is why a blockchain-based network is better protected than a platform with a centralized architecture. Dr. Brennen has analyzed the threat posed to blockchain technologies by modern quantum computing systems. He investigated the potential of a quantum computer "for manipulating the blockchain via the centralization of hashing power" and assessed the probability of recovering the key of the encryption system that underlies the mechanism protecting blockchain users. The results of the study show that existing developments in quantum computing are very far from the "imagined possibilities" of quantum technologies: modern quantum infrastructure operates at speeds utterly insufficient to solve extremely complex problems such as finding an encryption key in acceptable time. At least over the next 10 years, the speed of quantum computers will be insufficient compared to the capabilities of modern mining machines.
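The hash shielding described above can be sketched briefly. Real Bitcoin computes HASH160 = RIPEMD-160(SHA-256(pubkey)) to form an address; since RIPEMD-160 is absent from some hashlib builds, double SHA-256 stands in below, and the public key bytes are a placeholder:

```python
import hashlib

# Placeholder 33-byte compressed public key (illustration only)
pubkey = bytes.fromhex("02" + "11" * 32)

# An address commits only to a HASH of the public key. Real Bitcoin uses
# SHA-256 then RIPEMD-160 ("HASH160"); double SHA-256 stands in here.
addr_hash = hashlib.sha256(hashlib.sha256(pubkey).digest()).digest()

# Recovering `pubkey` from `addr_hash` means inverting a hash, which is
# believed hard even for quantum computers (Grover gives only a quadratic
# speedup). Recovering the PRIVATE key from an EXPOSED public key is the
# separate problem that Shor's algorithm actually threatens -- and the
# public key is only exposed once the owner signs a spending transaction.
assert len(addr_hash) == 32
```

This is why funds sitting on a never-spent-from address are comparatively safe from a quantum attacker, while reused addresses (whose public keys are already on-chain) are the vulnerable case.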
Bitcoin will not give way before quantum computing.
Can Quantum Computing Take Over Blockchain?
Practice crosses out theoretical constructions claiming that quantum computing can "master" the blockchain. This is due to the limited capabilities of existing hardware and the ongoing development of blockchain protection: the technology that could compromise the blockchain is obsolete by the time it appears, running roughly ten years behind the development of blockchain technology. John Martinis, head of Google's quantum computing laboratory, has also rejected the assumption that quantum computing could pose a direct threat to blockchain systems and cryptocurrencies in the near future. Martinis believes that building quantum computers will take at least a decade, and the practical implementation of effective quantum computing will require even more time. He believes that creating quantum devices "is really problematic and much more difficult than the creation of a classical computer". One of the world's leading experts on Bitcoin and blockchain, Andreas Antonopoulos, looked at the problem from another angle. He is convinced that the US NSA and other intelligence agencies will not use a quantum computer against Bitcoin, even if they have such a weapon. Andreas Antonopoulos said:
“I’m not at all worried that the NSA might have a quantum computer, because the basic security law says: if you have a powerful secret weapon, you do not use it. You need a very significant excuse to use it”.
He cited as an example the breaking, by the British cryptanalyst Alan Turing, of the German "Enigma" cipher machine used for encrypted military telegraph messages during the Second World War. The Germans used this machine, in particular, for secret communication in the Navy. The British government decided to keep this success in the strictest confidence and to hide the source of the information by any means (it was removed from the communication channels). The British even deliberately declined to prevent the sinking of some of their own ships by the Germans, because as soon as the enemy realized its codes had been compromised, it would immediately take measures to refine its technology.

The question of the quantum-computing threat is not the existence of a quantum computer, but its power — the number of quantum bits (qubits). Intelligence services at this stage of development cannot have enough power to attack the Bitcoin blockchain. A really serious problem will arise when quantum computers become commercially available, though not yet so cheap that everyone can use one against a Bitcoin wallet. During this transition period, Bitcoin will need to switch to new algorithms, and it is not yet clear how this transition will take place. Researchers are assessing the practicality of a quantum-secured blockchain, the essence of which is to make quantum communication the central element of the blockchain's protection. Quantum communications (more precisely, quantum key distribution) guarantee security based on the laws of physics, not on the difficulty of solving mathematical problems, as in the case of public-key cryptography. As a result, a quantum blockchain — a set of methods using quantum technologies for computation, whose operation relies on quantum communications to authenticate the participants of operations — would be invulnerable to attacks using a quantum computer.
Brennen and Tucker agree that quantum computing, at least on paper, definitely poses a threat to the security of blockchain networks. The fears are fed by sensationalist, panic-inducing articles in the media. Tucker believes that talk of quantum computing posing an immediate threat to the blockchain distracts from topics that really merit discussion. The quantum threat to Bitcoin cannot be completely excluded, but the level of this threat is estimated as minimal, especially taking into account the high reliability of this cryptocurrency's network and the powerful incentives to ensure the highest level of its security. Perhaps two conclusions can be drawn from all this. First, Bitcoin in its current form really is vulnerable to quantum computing. Second, it is equally obvious that there are, and will be, many opportunities to improve it: on the one hand, alternative systems of cryptographic protection for transactions, including ones based on public-key ciphers; on the other, quantum communication systems that guarantee the security of communication without relying on mathematics. So quantum systems promise new means of protecting virtual currency blockchains. Turning to ordinary money, we can note that as technology develops, so do the means of protecting it: to guard against counterfeiting of paper money, new and unusual technologies are constantly being invented. From all this it follows that, from a technical point of view, crypto assets are here for a long time, which makes their regulation useful. Material developed by the Legal Department of EdJoWa Holding.
LTO Network - Hybrid blockchain built for business
INTRODUCTION

We're celebrating the 10th year of blockchain technology, and a lot has been learned during that time. The market is slowly starting to fix itself; one of the biggest reasons for this is increased confidence in the market. The market's gradual elimination of fraud and security holes has made blockchain technology more powerful every day. It is fair to say that there are some difficulties because it is a very new technology, but if you think like me, you accept that blockchain technology is a weapon capable of changing our lives. The blockchain is used today in banking, artificial intelligence, entertainment, municipal services, shopping, and much more. It is slowly reaching the power to do everything that the fiat money we know can do. The blockchain has not only facilitated the use of money but also allowed us to use it in very functional ways; in addition, it seems to have done a better job of income distribution than fiat money. But the blockchain must keep going. This amazing technology, which changes the way we look at money and banks, is capable of more.

Problem Analysis

Nowadays, many altcoin blockchains use a limited number of shared infrastructures. The most common and best known is the Ethereum infrastructure: about 200 altcoins use it. In fact, Binance, the world's largest exchange, used this infrastructure for its own coin, BNB. Before turning to Ethereum, we should talk about Bitcoin's infrastructure, which dominates 52% of the market according to Coinmarketcap data. Compared to fiat money, Bitcoin's transaction speed and commission fees are really remarkable: Bitcoin transactions can be confirmed in under an hour, which means they are much faster than international fiat transfers and carry lower fees. But ten years after Bitcoin's invention, we can say that much more advanced technology is now in use.
For example, the Ethereum infrastructure can perform these operations in minutes, and projects such as EOS, also a blockchain, aim to shorten these times further. But it is well known that these infrastructures have problems with safety, speed, and integration. Because of these vulnerabilities, cryptocurrency worth millions of dollars becomes the target of attacks every year. This kind of damage to blockchain technology continues to be a serious obstacle to the realization of the blockchain revolution. However, new projects — or rather, projects that use a solid infrastructure — do not seem likely to encounter these problems in the future.

What is the LTO Network?

LTO Network is a decentralized and highly efficient blockchain infrastructure that provides maximum efficiency to its users and enables the integration of blockchain infrastructure into existing, production-ready systems. The LTO Network project is a very advanced technology product built on 10 years of blockchain experience. I remember the days when the project was chosen as the most valuable ICO of the year when its ICO was held in 2018. This year, the project has proven itself by providing services to various customers around the world.

Business Process Modeling is a common strategy that small, medium, and large enterprises use to maintain business continuity. Creating a visual model of a workflow process is a step toward ensuring that it can be analyzed, developed, and automated. Unlike procedures written in a natural language or in a programming language, these models can be understood by both people and computers. For inter-organizational collaboration, such modelling enhances communication. The parties concerned designate the process to be used as a binding agreement; on the LTO platform this is called a Live Contract. The LTO platform creates a temporary blockchain for each Live Contract.
This type of blockchain is not designed as a ledger in the literal sense.

Who are the actors on the LTO Network?

There are 4 different classes of token holders in the LTO Network system:
Collectors and Partners – these participants approve blocks in the system and enable transactions to take place. Block approval works this way not only on the LTO Network but in virtually every blockchain project. These participants earn an income by approving transactions.
Customers – if you are not one of the above, you are probably a customer. When you use the system to transact, you pay a very small commission, almost nothing compared to other projects.
Active Holders – as with many other systems, you can generate passive income by running a node on the LTO Network. To run a node, you simply rent a server and confirm LTO Network transactions through it.
Inactive Holders – holders who remain on the network without taking any action and without endorsing any blocks.
LTO Network reward pool and how it operates

We mentioned above that you have a chance to earn passive income through the LTO Network. Stakes vary according to the tokens that holders keep, and a number of rewards are available for keeping these tokens in the wallet. Token holders are rewarded according to the ratios shown below.
If a user holds 10% of the total token supply on the network and contributes 10% of the total transactions, that holder's block validation rate will be 105%.
If a user holds 10% of the total token supply but does not contribute to any validation transactions on the network, the block validation rate will be below 5%.
The LTO Network's stake pool is calculated by the following formula: the number of tokens staked divided by the contribution to block approval. This determines your share relative to the number of tokens staked.

LTO Blockchain and the ERC-20 Wallet

The LTO Network has two different tokens that serve different purposes. They can be exchanged for each other at a 1:1 ratio through the "bridge troll". A system called the "bridge" sits between the main-net pool and the ERC-20 pool, and the two pools serve different purposes:
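To make the reward logic above concrete, here is a purely hypothetical sketch: the weighting function and numbers are my own assumptions, not LTO's published formula. It weights each holder by both stake fraction and block-approval contribution, so a staker who never validates earns nothing from the pool.

```python
# Illustrative only: NOT LTO's actual reward formula.
# Assumes a holder's reward weight scales with both the share of
# tokens staked and the share of block approvals contributed.

def reward_shares(holders):
    """holders: dict name -> (tokens_staked, blocks_approved)."""
    total_stake = sum(s for s, _ in holders.values())
    total_blocks = sum(b for _, b in holders.values())
    weights = {
        name: (s / total_stake) * (b / total_blocks if total_blocks else 0)
        for name, (s, b) in holders.items()
    }
    total_w = sum(weights.values()) or 1
    return {name: w / total_w for name, w in weights.items()}

shares = reward_shares({
    "active_node":    (1000, 50),   # stakes and approves blocks
    "passive_holder": (1000, 0),    # stakes but never approves
})
print(shares)  # the active node captures the whole reward pool
```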
The LTO Network blockchain token is designed for actual use of the network: it powers the functionality of the platform and pays for transactions. This token is used by holders and for confirming transactions on the network.
The ERC-20 token was established to give companies entering the system a faster point of entry. Because it is more liquid, this token is used to onboard companies onto the LTO Network.
The 4 Main Dynamics of the LTO Network

The LTO Network has shaped the project around 4 main dynamics: development, growth, shake-out and maturity. In the first of these stages, Development, rewards will be relatively high and shared after the token distribution; those who buy the token early will be rewarded as the network approaches the "net zero" point shown in the chart. In the Growth stage, the proportion of passive customers will increase as they adapt to the platform, and transaction speed will increase as the network moves toward the "net zero" point. In the Shake-out stage, "net zero" incentives will favor the founders, and entry into the prize pool will narrow as stakes increase. In the Maturity stage, the market will grow as the majority of customers become partners, while passive holders will be rewarded through a much wider expansion of staking rewards.

CONCLUSION

Although the LTO Network seems very complex, it is much easier to understand when viewed closely. Because it has a far more advanced structure than other blockchain technologies, there are many opportunities to be gained from it. The token is already listed on Coinmarketcap and traded on many exchanges. Anyone who wants to invest in this project can earn passive income by running a node registered in the system. In addition, if you already run a system of your own, you can work with the LTO Network and get the chance to improve the quality of your business.
TOKEN ALLOCATION

MEET THE TEAM

For more information:
WEBSITE: https://lto.network
TELEGRAM: https://t.me/joinchat/AJWQTUDKtDlsuGHVFb40eQ
WHITEPAPER: https://lto.network/documents/LTO%20Network%20-%20Technical%20Paper.pdf
TWITTER: https://twitter.com/ltonetwork
MEDIUM: https://medium.com/ltonetwork
REDDIT: https://www.reddit.com/livecontracts/
YOUTUBE: https://www.youtube.com/channel/UCaHcF-xterKYTKSpY4xgKiw
GITHUB: https://github.com/legalthings

LTO Wallet Address: 3Jkjdy2bViwGsML6emPDAGBF5XC7MwYCq4g
MEW: 0x30Da745c024923B55f3a73E530e18382eF2130eB
Telegram: @nuxxorcoin
Did you know that LISK uses the Schnorr-style Ed25519 signature scheme, which is more secure, much faster, and more scalable than the secp256k1 ECDSA used by Bitcoin, Ethereum and Stratis?
Schnorr signatures have been praised by Bitcoin developers for a while. Adam Back acknowledged they are more secure: https://bitcointalk.org/index.php?topic=511074.msg5727641#msg5727641 And they are much faster (scalable to verifying hundreds of thousands of transactions per second): https://bitcointalk.org/index.php?topic=103172.0

DJB and friends claim that with their ed25519 curve (the "ed" is for Edwards) and a careful implementation they can do batch verification of 70,000+ signatures per second on a cheap quad-core Intel Westmere chip, which is now several generations old. Given advances in CPUs, it seems likely that in the near future the cited software will be capable of verifying many hundreds of thousands of signatures per second even at a constant core count. But core counts are not constant: in 10 years or so, 24-32 core chips will likely be standard even on consumer desktops, at which point a million signatures per second or more does not sound unreasonable.

Gavin Andresen, the former Bitcoin Chief Scientist, wanted to support it in Bitcoin: https://www.reddit.com/Bitcoin/comments/2jw5pm/im_gavin_andresen_chief_scientist_at_the_bitcoin/clfp3xj/ Bitcoin developers discussed including it: https://github.com/bitcoin-core/secp256k1/pull/212 However, it is still on the softfork wishlist: https://en.bitcoin.it/wiki/Softfork_wishlist Ed25519 is also used in Tahoe-LAFS, one of the most respected crypto projects: https://moderncrypto.org/mail-archive/curves/2014/000069.html

LISK is IoT friendly. A good feature of Schnorr signatures is that by design they do not require many computations on the signer's side. You can therefore use them even on a computationally weak platform (think of a smart card or RFID tag), or on a platform with no hardware support for multiple-precision arithmetic.
Advantages of Schnorr signatures

According to David Harding, Schnorr signatures can bring many benefits:

Smaller multisig transactions
Slightly smaller size for all transactions
Plausible deniability for multisig
Plausible deniability of authorized parties: using a third-party organizer (which doesn't need to be trusted with private keys), it's possible to prevent signers from knowing whether their private key is part of the set of signing keys.
Theoretically better security properties: the ed25519 page linked above also describes several ways it resists side-channel attacks, which can allow hardware wallets to operate safely in less secure environments.
Faster signature verification: it likely takes fewer CPU cycles to verify an ed25519 Schnorr signature than a secp256k1 ECDSA signature.
Multi-crypto multisig: with two (slightly) different cryptosystems to choose from, high-security users can create 2-of-2 multisig pubkey scripts that require both an ECDSA and a Schnorr signature, so their bitcoins can't be stolen if only one cryptosystem is broken.

https://bitcoin.stackexchange.com/questions/34288/what-are-the-implications-of-schnorr-signatures

Scalable multisig transactions

The magic of Schnorr signatures is most evident in their ability to aggregate the signatures from multiple inputs into a single signature validated once per transaction. The scaling implications are obvious: aggregation allows non-trivial savings in transmission, validation and storage for every peer on the network. The chart below illustrates the historical impact a switch to Schnorr signatures would have had in terms of space savings on the blockchain. (Alex B.)

The infamous malleability is a non-issue in LISK

Schnorr signatures provably have no inherent signature malleability, while ECDSA has a known malleability and lacks a proof that no other forms exist. Note, however, that Segregated Witness already prevents signature malleability from resulting in transaction malleability.
https://www.elementsproject.org/elements/schnorr-signatures/

Bitcoin, by contrast, has known malleability bugs.
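To make the scheme concrete, here is a toy Schnorr sign/verify cycle in Python over a deliberately tiny Schnorr group (p = 2q + 1). This is an educational sketch only: the parameters are hopelessly insecure, and real systems use curves such as Ed25519 or secp256k1. Note how cheap signing is — one exponentiation plus a multiply-add mod q — which is exactly the IoT-friendliness point made above.

```python
# Toy Schnorr signature over a tiny Schnorr group (p = 2q+1).
# Educational sketch only: this group is far too small to be secure.
import hashlib
import secrets

p, q, g = 2039, 1019, 4          # 4 = 2^2 generates the order-q subgroup

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1     # private key
    return x, pow(g, x, p)               # (x, X = g^x mod p)

def sign(x, msg):
    r = secrets.randbelow(q - 1) + 1     # fresh per-signature nonce
    R = pow(g, r, p)
    c = H(R, msg)                        # challenge binds nonce and message
    s = (r + c * x) % q
    return R, s

def verify(X, msg, sig):
    R, s = sig
    c = H(R, msg)
    return pow(g, s, p) == (R * pow(X, c, p)) % p   # g^s == R * X^c

x, X = keygen()
sig = sign(x, "pay 5 LSK to alice")
print(verify(X, "pay 5 LSK to alice", sig))   # True
```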
Why having Shadowcash, Dash, and Zcash in the cryptocurrencies section is incredibly irresponsible.
Cryptocurrencies, as many of you know, allow people for the first time to be in control of their money. This is important for a multitude of reasons related to personal freedom and expression, but not all cryptocurrencies can be trusted so blindly. Bitcoin is obviously the most famous and is well received among freedom-seeking individuals, but Bitcoin is not fungible and all transactions are publicly viewable, similar to publishing your bank statements online, which is not very good for privacy. The reason I think the cryptocurrencies named in the title are very poor candidates to be published on https://www.privacytools.io is that they make bold claims about security and privacy, two things many of us here value very much, without any assurances behind those claims, yet would be relied on and encouraged for use by privacytools.io. Shadowcash was created as a fork of the Bitcoin codebase with ring signatures implemented poorly as an optional sending feature meant to obscure transactions. One of Monero's cryptographers, Shen Noether, reviewed Shadowcash's ring-signature implementation and found that it was completely and irrevocably broken. Further reading on this topic can be found here https://shnoe.wordpress.com/2016/02/11/de-anonymizing-shadowcash-and-oz-coin/ and here https://github.com/ShenNoetheDeanon Such a lack of understanding of cryptography and security from developers building applications that regular people would rely on for their personal privacy should disqualify it from an esteemed website like https://www.privacytools.io. Not to mention a shady launch in which over 6 million coins were mined in the first two weeks of the network's lifetime before it quickly pivoted to a Proof-of-Stake network that no longer mints new coins.
This event very quickly and unfairly distributed large portions of the currency to the developers of Shadowcash, calling into question the legitimacy of this cryptocurrency and whether the project's goal is privacy and security at all. Dash (formerly known as Darkcoin) was also a fork of Bitcoin; to distinguish itself it implemented a modified version of CoinJoin for optional use in the protocol, which, as history has shown with Bitcoin's version of CoinJoin, makes zero assurances of privacy because the blockchain and all transactions remain publicly auditable by following the previous outputs. The Dash network also has centralized nodes called "masternodes" which execute many operations, from CoinJoin to the locking of transactions for faster "confirmations". Dash is touted as a private cryptocurrency yet has even less privacy than Bitcoin: http://weuse.cash/2016/10/26/warning-dash-privacy-is-worse-than-bitcoin/ The launch of Darkcoin (later rebranded to Dash), as described in the previous link, involved mining over 2 million coins (roughly 30% of the coin supply) over the course of two days. This is similar to Shadowcash's launch, which again calls into question the legitimacy of such a "privacy-focused" project and should warrant zero trust in the privacy or security claims it makes to users of privacytools.io. On to Zcash, another cryptocurrency that forked Bitcoin's code with the goal of creating a private and secure cryptocurrency, built on a brand-new cryptosystem called zk-SNARKs. I won't go into much detail about how zk-SNARKs work because that topic alone goes far beyond the scope of this post. Essentially, Zcash's optional feature for sending private transactions breaks the link between sender and recipient to obscure transaction data on the blockchain. But zk-SNARKs are an incredibly experimental cryptosystem that has not even been thoroughly peer-reviewed by the academic community.
This fact alone should warrant caution about relying on Zcash for privacy, because cryptography that is barely three years old and has not stood the test of time cannot reasonably be considered secure and reliable enough for privacytools.io users. Not to mention the trusted setup performed by two employees of the Zcash Electric Coin Company and four other individuals to generate the zk-SNARK parameters for Zcash's launch. The blind trust Zcash requires means infinite coins could be invisibly created in the network if these six individuals colluded or were using compromised computers during the generation. Finally, the corporation behind the project, Zcash Electric Coin Company, will receive 20% of all block rewards for the next four years. Such corporate interest and unproven cryptographic algorithms cannot be trusted by individuals seeking privacy and security on privacytools.io. To reiterate: these three cryptocurrencies, with snake-oil cryptography unverified by academic researchers and false claims to privacy and security, should not be published on a website like privacytools.io that encourages sound, secure, privacy-oriented applications over traditional options. Now, the argument for why Monero is a strong candidate to be featured on privacytools.io alongside Bitcoin is that Monero does not make false claims to security and privacy. Monero has been thoroughly reviewed by academic researchers (the CryptoNote review, the Ledger journal publication, researchers from University College London, and more). Monero uses widely acclaimed and reviewed cryptography created by Daniel J. Bernstein, an outspoken cryptographer and a huge supporter of privacy-related projects. Monero is, unlike the other cryptocurrencies discussed, private by default, allowing its users to transact freely on its network without adversaries surveilling all past and present transactions.
Monero is, in addition, a thriving open-source project that is also creating an alternative security-focused I2P router. The goals of Monero are very much in line with what privacy-seeking individuals desire: we simply want to reclaim our privacy in a world where privacy is relentlessly attacked and criticized. I hope the administrators of https://www.privacytools.io can re-evaluate the cryptocurrencies featured in the cryptocurrency section, because I cannot emphasize enough how detrimental it can be to recommend a cryptocurrency that may greatly threaten users' privacy. Thank you.
Blockchain, the distributed-ledger technology that underpins Bitcoin, is the wave of the financial future. By transforming industries with an advanced and optimized architecture, this ingenious technology eliminates intermediaries in many essential services, reducing costs and increasing efficiency. However, blockchain technology and its applications are still in their early years, so real concerns do exist. https://smartcryptosolution.org/
Businesses around the world are exploring innovative ways to harness the disruptive power of blockchain technology to securely exchange stocks and assets.
Several government authorities, institutions and businesses have tested blockchain technology and found it remarkably safe and immutable. But is blockchain technology powerful enough to stop cybercriminals and improve security and transparency in all transactions?
Decentralization, cryptography and consensus are the three defining characteristics of blockchain technology that contribute to its immutability.
When you own bitcoin, you use digital keys (a public and private key pair) to prove your ownership of and access to funds in a secure cryptosystem.
Whenever there is a digital asset transaction, its details are stored in a block that includes the data, the hash and the hash of the previous block.
The data stored in a block varies depending on the type of blockchain.
For a bitcoin transaction, the data would be the details of the transaction, such as the sender, the recipient and the amount of coins transferred.
The hash is a unique string produced by a cryptographic hash function; it identifies the corresponding block and its data.
This hash is processed with new transactions to create a new hash for the next chain block.
Because each block is bound to its predecessor by its hash, any change to any part of the data would change that hash and every hash after it, invalidating all subsequent blocks.
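The hash-chaining described above can be demonstrated in a few lines of Python. This is a minimal illustrative sketch, not any real blockchain's data structure:

```python
# Minimal hash-chained ledger: each block stores its data, its own hash,
# and the previous block's hash, so editing any block breaks every
# hash that follows it.
import hashlib

def block_hash(data, prev_hash):
    return hashlib.sha256(f"{prev_hash}:{data}".encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64                     # genesis prev-hash
    for data in records:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or blk["hash"] != block_hash(blk["data"], prev):
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["alice->bob:1", "bob->carol:2"])
print(is_valid(chain))                 # True
chain[0]["data"] = "alice->bob:99"     # tamper with an early block
print(is_valid(chain))                 # False: its hash no longer matches
```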
Each time a new transaction is launched in a public blockchain, it is transmitted via an open peer-to-peer (P2P) network, accessible to all.
A group of participants on the peer-to-peer bitcoin network, called miners, use their computational power to record transactions and verify their accuracy by solving a complex cryptographic puzzle, a mechanism known as "proof of work".
Whenever a miner solves the puzzle and mines a block, the block must be validated by the remaining nodes in accordance with the consensus protocol.
If the block is validated, it is added to the network's blockchain, and the miner who solved the puzzle is rewarded with cryptocurrency.
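The mining puzzle itself can be sketched as follows. This is a toy model of hash-based proof of work (the difficulty target and encoding are simplified assumptions, not Bitcoin's actual block format):

```python
# Toy proof-of-work: find a nonce so the block hash starts with a given
# number of zero hex digits. The search takes ~16^difficulty attempts
# on average, but verification is a single hash.
import hashlib

def pow_hash(block, nonce):
    return hashlib.sha256(f"{block}:{nonce}".encode()).hexdigest()

def mine(block, difficulty=4):
    nonce = 0
    while not pow_hash(block, nonce).startswith("0" * difficulty):
        nonce += 1
    return nonce

def check(block, nonce, difficulty=4):
    return pow_hash(block, nonce).startswith("0" * difficulty)

nonce = mine("block-42", difficulty=4)
print(check("block-42", nonce))        # True: cheap for other nodes to verify
```

The asymmetry shown here — expensive to produce, cheap to check — is what lets the other nodes validate a mined block almost for free.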
Private blockchains, on the other hand, operate on permissioned networks in which only selected entities are granted access rights and may create new transactions on the chain.
All blocks in the blockchain are linked in chronological order and monitored via P2P nodes, making record tampering or network corruption extremely difficult.
To tamper with the bitcoin blockchain, an attacker would have to alter not just one block but every block after it, and then redo the proof-of-work faster than all the computers connected to the peer-to-peer network combined, which is practically impossible.
Does this mean blockchain technology is completely unalterable? Not quite; we should consider the scenarios in which a blockchain can be altered. If a miner or mining pool controls more than half of the hash power on a blockchain network, power is reconcentrated in a single entity, opening the door to attacks and illegal gains.

Conclusion

In conclusion, absolute immutability does not exist. But this decentralized, distributed digital ledger has enormous potential to meet the ever-changing needs of different sectors, and the cryptocurrency market will no doubt become safer and more reliable in the near future.
Abstract We propose a proof of work protocol that computes the discrete logarithm of an element in a cyclic group. Individual provers generating proofs of work perform a distributed version of the Pollard rho algorithm. Such a protocol could capture the computational power expended to construct proof-of-work-based blockchains for a more useful purpose, as well as incentivize advances in hardware, software, or algorithms for an important cryptographic problem. We describe our proposed construction and elaborate on challenges and potential trade-offs that arise in designing a practical proof of work. References
SpaceMint: A cryptocurrency based on proofs of space. In: FC’18. Springer (2018)
Back, A.: Hashcash-a denial of service counter-measure (2002)
Ball, M., Rosen, A., Sabin, M., Vasudevan, P.N.: Proofs of work from worst-case assumptions. In: CRYPTO 2018. Springer International Publishing (2018)
Barbulescu, R., Gaudry, P., Joux, A., Thomé, E.: A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. In: EUROCRYPT’14 (2014)
Barker, E., Chen, L., Roginsky, A., Vassilev, A., Davis, R.: SP 800-56A Revision 3. Recommendation for pair-wise key establishment schemes using discrete logarithm cryptography. National Institute of Standards & Technology (2018)
Biryukov, A., Pustogarov, I.: Proof-of-work as anonymous micropayment: Rewarding a Tor relay. In: FC’15. Springer (2015)
Bitansky, N., Canetti, R., Chiesa, A., Goldwasser, S., Lin, H., Rubinstein, A., Tromer, E.: The hunting of the SNARK. Journal of Cryptology 30(4) (2017)
Boneh, D., Bonneau, J., Bünz, B., Fisch, B.: Verifiable delay functions. In: Annual International Cryptology Conference. pp. 757–788. Springer (2018)
Bos, J.W., Kaihara, M.E., Kleinjung, T., Lenstra, A.K., Montgomery, P.L.: Solving a 112-bit prime elliptic curve discrete logarithm problem on game consoles using sloppy reduction. International Journal of Applied Cryptography 2(3) (2012)
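For readers unfamiliar with the underlying algorithm, the following is a rough, non-distributed sketch of Pollard's rho for discrete logarithms; the protocol in the abstract above distributes exactly this kind of random walk across many provers. The demo group parameters are my own toy values, not the paper's construction.

```python
# Sketch of Pollard's rho for discrete logarithms: given h = g^x in a
# group of prime order n, recover x in roughly sqrt(n) group operations.

def pollard_rho_dlog(g, h, p, n):
    def step(y, a, b):
        s = y % 3                      # crude 3-way partition of the group
        if s == 0:
            return (y * y) % p, (2 * a) % n, (2 * b) % n   # y -> y^2
        if s == 1:
            return (y * g) % p, (a + 1) % n, b             # y -> y*g
        return (y * h) % p, a, (b + 1) % n                 # y -> y*h

    # Floyd cycle detection: tortoise one step, hare two steps.
    y1, a1, b1 = 1, 0, 0
    y2, a2, b2 = 1, 0, 0
    while True:
        y1, a1, b1 = step(y1, a1, b1)
        y2, a2, b2 = step(*step(y2, a2, b2))
        if y1 == y2:
            break
    # Collision: g^a1 * h^b1 == g^a2 * h^b2  =>  x = (a1-a2)/(b2-b1) mod n
    db = (b2 - b1) % n
    if db == 0:
        return None                    # unlucky cycle; retry from a new start
    return (a1 - a2) * pow(db, -1, n) % n

p, n, g = 1019, 509, 4                 # order-509 subgroup of Z_1019^*
x = 357
h = pow(g, x, p)
print(pollard_rho_dlog(g, h, p, n))    # expected: 357 (or None on an unlucky cycle)
```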
Abstract A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
References
W. Dai, "b-money," http://www.weidai.com/bmoney.txt, 1998.
H. Massias, X.S. Avila, and J.-J. Quisquater, "Design of a secure timestamping service with minimal trust requirements," In 20th Symposium on Information Theory in the Benelux, May 1999.
S. Haber, W.S. Stornetta, "How to time-stamp a digital document," In Journal of Cryptology, vol 3, no 2, pages 99-111, 1991.
D. Bayer, S. Haber, W.S. Stornetta, "Improving the efficiency and reliability of digital time-stamping," In Sequences II: Methods in Communication, Security and Computer Science, pages 329-334, 1993.
S. Haber, W.S. Stornetta, "Secure names for bit-strings," In Proceedings of the 4th ACM Conference on Computer and Communications Security, pages 28-35, April 1997.
A. Back, "Hashcash - a denial of service counter-measure," http://www.hashcash.org/papers/hashcash.pdf, 2002.
R.C. Merkle, "Protocols for public key cryptosystems," In Proc. 1980 Symposium on Security and Privacy, IEEE Computer Society, pages 122-133, April 1980.
W. Feller, "An introduction to probability theory and its applications," 1957.
Abstract We introduce the notion of two-factor signatures (2FS), a generalization of a two-out-of-two threshold signature scheme in which one of the parties is a hardware token which can store a high-entropy secret, and the other party is a human who knows a low-entropy password. The security (unforgeability) property of 2FS requires that an external adversary corrupting either party (the token or the computer the human is using) cannot forge a signature. This primitive is useful in contexts like hardware cryptocurrency wallets in which a signature conveys the authorization of a transaction. By the above security property, a hardware wallet implementing a two-factor signature scheme is secure against attacks mounted by a malicious hardware vendor; in contrast, all currently used wallet systems break under such an attack (and as such are not secure under our definition). We construct efficient provably-secure 2FS schemes which produce either Schnorr signatures (assuming the DLOG assumption) or EC-DSA signatures (assuming security of EC-DSA and the CDH assumption) in the Random Oracle Model, and evaluate the performance of implementations of them. Our EC-DSA based 2FS scheme can directly replace currently used hardware wallets for Bitcoin and other major cryptocurrencies to enable security against malicious hardware vendors.
References
Jesús F. Almansa, Ivan Damgård, and Jesper Buus Nielsen. Simplified threshold RSA with adaptive and proactive security. In Eurocrypt, volume 4004, pages 593–611. Springer, 2006.
Dan Boneh, Xuhua Ding, Gene Tsudik, and Chi-Ming Wong. A method for fast revocation of public key certificates and security capabilities. In USENIX Security Symposium, pages 22–22, 2001.
Jan Camenisch, Anja Lehmann, Gregory Neven, and Kai Samelin. Virtual smart cards: how to sign with a password and a server, 2016.
Yvo Desmedt and Yair Frankel. Threshold cryptosystems. In Advances in Cryptology – CRYPTO 1989, pages 307–315. Springer, 1990.
J. Doerner, Y. Kondi, E. Lee, and a. shelat. Secure two-party threshold ECDSA from ECDSA assumptions. In 2018 IEEE Symposium on Security and Privacy (SP), pages 595–612, 2018.
Rosario Gennaro and Steven Goldfeder. Fast multiparty threshold ECDSA with fast trustless setup. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 1179–1194. ACM, 2018.
Rosario Gennaro, Stanisław Jarecki, Hugo Krawczyk, and Tal Rabin. Robust and efficient sharing of RSA functions. In Advances in Cryptology – CRYPTO 1996, pages 157–172. Springer, 1996.
Steven Goldfeder, Rosario Gennaro, Harry Kalodner, Joseph Bonneau, Joshua A. Kroll, Edward W. Felten, and Arvind Narayanan. Securing Bitcoin wallets via a new DSA/ECDSA threshold signature scheme, 2015.
Yehuda Lindell. Fast secure two-party ECDSA signing. In Advances in Cryptology – CRYPTO 2017, pages 613–644. Springer, 2017.
Yehuda Lindell and Ariel Nof. Fast secure multiparty ECDSA with practical distributed key generation and applications to cryptocurrency custody. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 1837–1854. ACM, 2018.
Philip MacKenzie and Michael K. Reiter. Delegation of cryptographic servers for capture-resilient devices. Distributed Computing, 16(4):307–327, 2003.
Philip MacKenzie and Michael K. Reiter. Networked cryptographic devices resilient to capture. International Journal of Information Security, 2(1):1–20, 2003.
Antonio Marcedone, Rafael Pass, and abhi shelat. Minimizing trust in hardware wallets with two factor signatures. Cryptology ePrint Archive, Report 2018/???, 2018.
Microchip. ATECC608A datasheet, 2018.
Antonio Nicolosi, Maxwell N. Krohn, Yevgeniy Dodis, and David Mazières. Proactive two-party signatures for user authentication. In NDSS, 2003.
Marek Palatinus, Pavol Rusnak, Aaron Voisine, and Sean Bowe. Mnemonic code for generating deterministic keys (BIP39). https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki.
Tal Rabin. A simplified approach to threshold and proactive RSA. In Advances in Cryptology – CRYPTO 1998, pages 89–104. Springer, 1998.
T.C. Sottek. NSA reportedly intercepting laptops purchased online to install spy malware, December 2013. [Online; posted 29-December-2013; https://www.theverge.com/2013/12/29/5253226/nsacia-fbi-laptop-usb-plant-spy].
Abstract We construct new multi-signature schemes that provide new functionality. Our schemes are designed to reduce the size of the Bitcoin blockchain, but are useful in many other settings where multi-signatures are needed. All our constructions support both signature compression and public-key aggregation. Hence, to verify that a number of parties signed a common message m, the verifier only needs a short multi-signature, a short aggregation of their public keys, and the message m. We give new constructions that are derived from Schnorr signatures and from BLS signatures. Our constructions are in the plain public key model, meaning that users do not need to prove knowledge or possession of their secret key. In addition, we construct the first short accountable-subgroup multi-signature (ASM) scheme. An ASM scheme enables any subset S of a set of n parties to sign a message m so that a valid signature discloses which subset generated the signature (hence the subset S is accountable for signing m). We construct the first ASM scheme where signature size is only O(k) bits over the description of S, where k is the security parameter. Similarly, the aggregate public key is only O(k) bits, independent of n. The signing process is non-interactive. Our ASM scheme is very practical and well suited for compressing the data needed to spend funds from a t-of-n Multisig Bitcoin address, for any (polynomial size) t and n. References
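The key-aggregation idea in the abstract can be illustrated with a toy, MuSig-style sketch. This is an illustration of the algebra only, not the paper's actual schemes: the group parameters are tiny and insecure, the signing round is simplified to a single pass, and all names are ours. The point it shows is that each public key gets a coefficient derived from the whole key set, so the verifier checks one short signature against one aggregated key.

```python
import hashlib
import random

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
# These parameters are far too small for real use -- illustration only.
q = 1019
p = 2 * q + 1   # 2039, also prime
g = 4           # a quadratic residue mod p, hence of order q

def H(*parts):
    """Hash arbitrary values to an exponent mod q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)          # secret key
    return x, pow(g, x, p)              # (secret, public) pair

def aggregate_keys(pubkeys):
    """MuSig-style aggregation: X = prod pk_i^H(L, pk_i) mod p."""
    L = sorted(pubkeys)                 # canonical description of the key set
    coeffs = [H(L, pk) for pk in pubkeys]
    X = 1
    for pk, a in zip(pubkeys, coeffs):
        X = (X * pow(pk, a, p)) % p
    return X, coeffs

def sign(msg, keys):
    """All signers cooperate; returns one short signature (R, s)."""
    pubkeys = [pk for _, pk in keys]
    X, coeffs = aggregate_keys(pubkeys)
    nonces = [random.randrange(1, q) for _ in keys]
    R = 1
    for r in nonces:
        R = (R * pow(g, r, p)) % p      # combined nonce commitment
    c = H(R, X, msg)                    # single challenge for everyone
    s = sum(r + c * a * x for r, (x, _), a in zip(nonces, keys, coeffs)) % q
    return R, s

def verify(msg, X, sig):
    """Verifier needs only the aggregated key X, the message, and (R, s)."""
    R, s = sig
    c = H(R, X, msg)
    return pow(g, s, p) == (R * pow(X, c, p)) % p
```

Because every coefficient depends on the full key list, a signer cannot choose a rogue key that cancels the others out; this is the plain-public-key trick that removes the need for proofs of key possession.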
Representatives of the world's leading banks have repeatedly accused Bitcoin of being a financial pyramid whose real value is zero. Let's see what caused such negative criticism, and how the position of bankers has changed as the cryptocurrency has developed.

Why are bankers against Bitcoin? The modern financial system operates according to the following principle: states issue national currencies and establish rules for their use, while banks store money and process payments, with their main task being to enforce the rules the states have established. Banks therefore have a monopoly on the disposal of money. In effect, they use our money for their own purposes and force us to pay for the service. There was no alternative to this order of things, or to be exact, there wasn't until cryptocurrency appeared. The creator of Bitcoin, Satoshi Nakamoto, challenged the existing financial system by offering a payment network that can work without banks and politicians. Of course, bankers did not like this, because they do not want to lose the right to dispose of other people's money.

Criticism of Bitcoin. The media like writing about how politicians and representatives of the current financial system criticize Bitcoin. Take, for example, the head of the US financial holding JPMorgan Chase, Jamie Dimon, who called Bitcoin a fraud and promised to fire any of his employees engaged in crypto trading, both because such employees violate the bank's policy and because they are "dumb" for getting involved with Bitcoin. The billionaire investor Howard Marks was also unflattering about Bitcoin: in his words, Bitcoin has no real value and is an "unreasonable whim". To discredit Bitcoin, not only public statements were used, but also purported financial studies: analysts at the banking holding Morgan Stanley came to the conclusion that the real price of Bitcoin is zero.

How has the attitude of banks toward cryptocurrencies changed? Gradually, Bitcoin's critics are retracting their words, as Jamie Dimon and Howard Marks have already done. These rich and influential people apologized to the crypto community, but their real actions matter more than their public statements. JPMorgan Chase has created a separate unit to develop a cryptocurrency strategy and is working on its own blockchain platform. The details are kept secret, but the world's leading bank, headed by Jamie Dimon, has clearly decided not to stay away from the crypto market. Another well-known investment bank, Goldman Sachs, has invested tens of millions of dollars in the payment service Circle, which offers cryptocurrency trading and recently acquired the large crypto exchange Poloniex. Asian banks have not stood aside either: for example, Japan's leading financial company SBI Holding launched its own crypto exchange, VCTRADE.

Conclusions. Banks are not going to give Bitcoin a green light, so attacks and accusations will continue, and the fact that several prominent representatives of the banking sector have publicly changed their minds still does not mean much by itself. At the same time, bankers see benefits in being present in the cryptocurrency market and are taking steps to find their own niche. This will attract new capital and customers, giving cryptocurrencies a significant impetus for growth. In such a situation, it is extremely important that the crypto community maintain its independence and not fall under the total control of banks. Bear in mind that the real fight is just beginning!
Drivechain RfD -- Follow Up | Paul Sztorc | Jun 10 2017
Paul Sztorc on Jun 10 2017: Hi everyone, It has been 3 weeks -- responses so far have been really helpful. People jumped right in, and identified unfinished or missing parts much faster than I thought they would (ie, ~two days). Very impressive. Currently, we are working on the sidechain side of blind merged mining. As you know, most of the Bitcoin cryptosystem is about finding the longest chain, and displaying information about this chain. CryptAxe is editing the sidechain code to handle reorganizations in a new way (an even bigger departure than Namecoin's, imho). I believe that I have responded to all the on-list objections that were raised. I will 1st summarize the on-list objections, and 2nd summarize the off-list discussion (focusing on three key themes).

On-List Objection Summary

In general, they were:
* Peter complained about the resources required for the BMM 'crisis audit'; I pointed out that it is actually optional (and, therefore, free), that it doesn't affect miners relative to each other, and that it can be done in an ultra-cheap semi-trusted way with high reliability.
* Peter also asked about miner incentives; I replied that it is profit-maximizing to BMM sidechains, because the equation (Tx Fees - Zero Cost) is always positive.
* ZmnSCPxj asked a long series of clarifying questions, and I responded.
* Tier Nolan complained about my equivocation of the "Bitcoin: no block subsidy" case and the "sidechain" case. He cites the asymmetry I point out below (in #2). I replied, and I give an answer below.
* ZmnSCPxj pointed out an error in our OP Code (which we will fix).
* ZmnSCPxj also asked a number of new questions, and I responded. He then responded again; in general he seemed to raise many of the points addressed in #1 (below).
* ZmnSCPxj wanted reorg proofs, to punish reorgers, but I pointed out that if 51% can reorg, they can also filter out the reorg proof. We are at their mercy in all cases (for better or worse).
* ZmnSCPxj also brought up the fact that a block size limit is necessary for a fee market; I pointed out that this limit does not need to be imposed on miners by nodes -- miners would be willing and able to self-impose such a limit, as it maximizes their revenues.
* ZmnSCPxj also protested the need to soft fork in each individual sidechain; I pointed out my strong disagreement ("Unrestrained smart contract execution will be the death of most of the interesting applications...[could] destabilize Bitcoin itself") and introduced my prion metaphor.
* ZmnSCPxj and Tier Nolan both identified the problem solved by our 'ratchet' concept. I explained it to ZmnSCPxj in my reply. We had not coded it at the time, but there is code for it now. Tier proposed a ratchet design, but I think ours is better (Tier did not find ours at all, because it is buried in obscure notes; I didn't think anyone would make it this far so quickly).
* Tier also advised us on how to fix the problem that ZmnSCPxj had identified earlier with our NOP.
* Tier also had a number of suggestions about the real-time negotiation of the OP Bribe amount between nodes and miners. I'm afraid I mostly ignored these for now, as we aren't there yet.
* Peter complained that ACKing the sidechain could be exploited for political reasons; I responded that in such a case, miners are free to simply avoid ACKing, or to acquiesce to political pressure. Neither affects the mainchain.
* Peter complained that sidechains might provide some miners with the opportunity to create a pretext to kick other miners off the network. I replied that it would not, and I also brought up the fact that my Bitcoin security model is indifferent to which people happen to be mining at any given time. I continue to believe that "mining centralization" does not have a useful definition.
* Peter questioned whether or not sidechains would be useful. I stated my belief that they would be, linked to my site (drivechain.info/projects), which contains a number of sidechain use-cases, and cited my personal anecdotal experiences.
* Peter wanted to involve miners "as little as possible"; I pointed out that I felt I had indeed done this minimization. My view is that Peter erroneously felt it was possible to involve miners less, because he neglected (1) that a 51% miner group is already involved maximally, as they can create most messages and filter any message, and (2) that there are cases where we need miners to filter out harmful interactions among multiple chains (just as they filter out harmful interactions among multiple txns [ie, "double spends"]). Peter has not yet responded to this rebuttal.
* Peter also suggested client-side validation as "safer"; I pointed out that sidechains+BMM *is* client-side validation. I asked Peter for CS-V code, so that we can compare the safety and other features.
* Sergio reminded us of his version of drivechain. Sergio and I disagree over the emphasis on frequency/speed of withdrawals. Also, Sergio emphasizes a hybrid model, which does not interest me.

If I missed any objections, I hope someone will point them out.

Off-List / Three Points of Ongoing Confusion

Off list, I have repeated a similar conversation perhaps 6-10 times over the past week. There is a cluster of remaining objections which centers around three topics -- speed, theft, and antifragility. I will reply here, and add the answers to my FAQ (drivechain.info/faq).
1. Speed

This objection is voiced after I point out that side-to-main transfers ("withdrawals") will probably take a long time, for example ~5 months each. (These are customizable parameters, and open for debate -- but if withdrawals are assembled every x=3 months, and only one withdrawal can make forward progress [on the mainchain] at a time, and only one prospective withdrawal can be assembled [by the sidechain] at a time, then we can expect total withdrawal time to average 4.5 months [(.5)*3+3].) The response is something like: "Won't such a system be too slow, and therefore unusable?" Imho, replies of this kind disregard the effect of atomic cross-chain swaps, which are instant. (In addition, while side-to-main transfers are slow, main-to-side transfers are quite fast, ~10 confirmations. I would go as far as to say that, just as the Lightning Network is enabled by SegWit and CSV, Drivechain is enabled by atomic swaps and Counterparty-like embedded consensus.) Thanks to atomic swaps, someone can act as an investment banker or custodian, purchasing side:BTC at a (tiny, competitive) discount and then transferring them side-to-main at minimal inconvenience (comparable to that of someone who buys a bank CD). Through market activities, the entire system becomes exactly as patient as its most-patient members. As icing on the cake, people who aren't planning on using their BTC anytime soon (ie, "the patient") can even earn a tiny investment yield in return for providing this service.
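The expected-latency figure above is simple arithmetic; a few lines make the [(.5)*3+3] expression explicit (variable names are ours, for illustration):

```python
# Expected side-to-main withdrawal latency under the example parameters.
period_months = 3   # a new withdrawal is assembled every 3 months

# A withdrawal request arrives, on average, halfway through the current
# assembly window (0.5 * period), and then needs one full period to make
# forward progress on the mainchain.
expected_total = 0.5 * period_months + period_months
print(expected_total)  # 4.5 (months), matching the post's [(.5)*3+3] figure
```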
2. Theft

This objection usually says something like: "Aren't you worried that 51% [of hashrate] will steal the coins, given that mining is so centralized these days?" The logic of drivechain is to take a known fact -- that miners do not steal from exchanges (by coordinating to double-spend deposits to those exchanges) -- and generalize it to a new situation -- that [hopefully] miners will not steal from sidechains (by coordinating to make 'invalid' withdrawals from them). My generalization is slightly problematic, because "a large mainchain reorg today" would hit the entire Bitcoin system and de-confirm all of the network's transactions, whereas a sidechain-theft would hit only a small portion of the system. This asymmetry is a problem because of the 1:1 peg, which is explicitly symmetrical: the thief makes off with coins that are worth just as much as the coins he did not attack. The side:BTC are therefore relatively more vulnerable to theft, which weakens the generalization. As I've just explained, to correct this relative deficiency, we add extra inconvenience for any sidechain thievery: in this case, the long, long withdrawal time of several months. (Which is also the main distinction between DC and extension blocks.) I cannot realistically imagine an erroneous withdrawal persisting for several weeks, let alone several months. First, over a timeframe of this duration, there can be no pretense of 'we made an innocent mistake', nor of 'it is too inconvenient for us to fix this problem'; it requires the attacker to be part of an explicitly malicious conspiracy. Even in the conspiring case, I do not understand how miners would coordinate the distribution of the eventual "theft" payout, ~3 months from now. If new hashrate comes online between now and then, does it get a portion? If today's hashrate closes down, does it get a reduced portion? Who decides?
I don't think that an algorithm can decide (without creating a new mechanism, which -I believe- would have to be powered by ongoing sustainable theft [which is impossible]), because the withdrawal (ie the "theft") has to be initiated, with a known destination, before it accumulates 3 months worth of acknowledgement. Even if hashrate were controlled exclusively by one person, such a theft would obliterate the sidechain-tx-fee revenue from all sidechains, for a start. It would also greatly reduce the market price of [mainchain] BTC, I feel, as it ends the sidechain experiment more-or-less permanently. And even that di...[message truncated here by reddit bot]... original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014559.html
What is CDY? Bitcoin Candy (CDY) is a new chain forked from Bitcoin Cash at height 512666. Original BCH holders will be compensated with 1000 CDY for every BCH held. New features will be added to the forked chain, and we will explore anti-quantum-attack solutions on this chain. What is this anti-quantum-attack thing? In the past few years, D-Wave, IBM, Intel, and other technology giants have invested a great deal of manpower and resources in the research and development of quantum computers; for example, Google has embedded a D-Wave quantum computer into its cloud platform, and the research team under Prof. Pan Weijian in Chinese University of Technology has made breakthrough achievements in quantum communications. Under such circumstances, the quantum age will no longer stay in science-fiction conjecture (it may arrive in perhaps 5-10 years). The development of quantum computers will not only bring great changes to people's lives, but also pose a serious threat to traditional public-key cryptography. Public-key cryptosystems such as ECDSA and RSA could be broken in polynomial time by these quantum computers, so virtual currencies like Bitcoin, which use ECDSA as a signature algorithm, will become unsafe. Finding a post-quantum digital signature algorithm is of vital importance. Our team has a deep post-quantum-cryptography background and will conduct research and experiments on the CDY chain toward practical public-key signature algorithms for the post-quantum-era blockchain. Why fork from BCH, and can BTC holders get free CDY? On August 1, 2017, the Bitcoin community finally ended its years-long scaling war by splitting the original Bitcoin into two chains: Bitcoin Cash (BCH) and SegWit Bitcoin (inheriting the BTC ticker). We think BCH is more in line with Satoshi Nakamoto's vision of Bitcoin as "a peer-to-peer electronic cash system" and will have a brighter future. Only those who hold BCH at height 512666 (about January 13) can get free CDY, at a rate of 1 BCH : 1000 CDY. What is the total supply of CDY?
CDY will have a total supply of 21 billion, of which 1% will be pre-mined. How do I claim my free CDY? To get free CDY, you need to hold Bitcoin Cash before height 512666 (about January 13). 1. If your BCH is stored in a wallet where you control the private key yourself, you will definitely get free CDY; just wait for the wallet developer to provide a claiming feature. 2. If your BCH is stored on an exchange, the exchange will receive the free CDY; pay attention to whether or not the exchange will support claiming it. If the exchange where you store BCH does not support CDY, then to avoid the loss of assets it is recommended to withdraw your BCH to a wallet where you control the private key, or to an exchange that supports CDY.
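The supply figures above can be cross-checked with a few lines of arithmetic. This is a sketch: the 21,000,000 BCH cap is an assumption drawn from Bitcoin's well-known coin limit (which BCH inherits), not a number stated in the announcement, and circulating BCH at the fork height is of course somewhat lower than the cap.

```python
# Sanity check of the quoted CDY distribution figures (toy arithmetic).
TOTAL_SUPPLY_CDY = 21_000_000_000       # "total supply of 21 billion"
PREMINE_CDY = TOTAL_SUPPLY_CDY // 100   # "1% will be pre-mined"
AIRDROP_RATE = 1000                     # CDY credited per BCH at height 512666
BCH_SUPPLY_CAP = 21_000_000             # assumed from Bitcoin's 21M limit

# Upper bound on the airdrop: the 1000:1 rate scales Bitcoin's 21M cap
# exactly to the 21B CDY total supply.
max_airdrop = BCH_SUPPLY_CAP * AIRDROP_RATE
print(PREMINE_CDY, max_airdrop)  # 210000000 21000000000
```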
Is anyone else freaked out by this whole blocksize debate? Does anyone else find themselves often agreeing with *both* sides - depending on whichever argument you happen to be reading at the moment? And do we need some better algorithms and data structures?
Why do both sides of the debate seem “right” to me? I know, I know, a healthy debate is healthy and all - and maybe I'm just not used to the tumult and jostling which would be inevitable in a real live open major debate about something as vital as Bitcoin. And I really do agree with the starry-eyed idealists who say Bitcoin is vital. Imperfect as it may be, it certainly does seem to represent the first real chance we've had in the past few hundred years to try to steer our civilization and our planet away from the dead-ends and disasters which our government-issued debt-based currencies keep dragging us into. But this particular debate, about the blocksize, doesn't seem to be getting resolved at all. Pretty much every time I read one of the long-form major arguments contributed by Bitcoin "thinkers" who I've come to respect over the past few years, this weird thing happens: I usually end up finding myself nodding my head and agreeing with whatever particular piece I'm reading! But that should be impossible - because a lot of these people vehemently disagree! So how can both sides sound so convincing to me, simply depending on whichever piece I currently happen to be reading? Does anyone else feel this way? Or am I just a gullible idiot? Just Do It? When you first look at it or hear about it, increasing the size seems almost like a no-brainer: The "big-block" supporters say just increase the blocksize to 20 MB or 8 MB, or do some kind of scheduled or calculated regular increment which tries to take into account the capabilities of the infrastructure and the needs of the users. We do have the bandwidth and the memory to at least increase the blocksize now, they say - and we're probably gonna continue to have more bandwidth and memory in order to be able to keep increasing the blocksize for another couple decades - pretty much like everything else computer-based we've seen over the years (some of this stuff is called by names such as "Moore's Law"). 
On the other hand, whenever the "small-block" supporters warn about the utter catastrophe that a failed hard-fork would mean, I get totally freaked by their possible doomsday scenarios, which seem totally plausible and terrifying - so I end up feeling that the only way I'd want to go with a hard-fork would be if there was some pre-agreed "triggering" mechanism where the fork itself would only actually "switch on" and take effect provided that some "supermajority" of the network (of who? the miners? the full nodes?) had signaled (presumably via some kind of totally reliable p2p trustless software-based voting system?) that they do indeed "pre-agree" to actually adopt the pre-scheduled fork (and thereby avoid any possibility whatsoever of the precious blockchain somehow tragically splitting into two and pretty much killing this cryptocurrency off in its infancy). So in this "conservative" scenario, I'm talking about wanting at least 95% pre-adoption agreement - not the mere 75% which I recall some proposals call for, which seems like it could easily lead to a 75/25 blockchain split. But this time, with this long drawn-out blocksize debate, the core devs, and several other important voices who have become prominent opinion shapers over the past few years, can't seem to come to any real agreement on this. Weird split among the devs As far as I can see, there's this weird split: Gavin and Mike seem to be the only people among the devs who really want a major blocksize increase - and all the other devs seem to be vehemently against them. But then on the other hand, the users seem to be overwhelmingly in favor of a major increase. And there are meta-questions about governance, about why this didn't come out as a BIP, and what the availability of Bitcoin XT means.
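The "pre-agreed triggering" idea above can be sketched in a few lines of Python. Everything here is hypothetical - the function name, the window size, and what counts as "signaling" are all invented for illustration - but it shows why 75% is exactly the borderline case and 95% leaves far less room for a chain split:

```python
# Hypothetical sketch (not any actual BIP or deployed mechanism):
# a fork that only "switches on" once a supermajority of the most
# recent blocks have signaled support for it.

def fork_activates(signal_bits, threshold=0.95, window=1000):
    """Return True if the fraction of signaling blocks in the most
    recent `window` blocks meets the supermajority threshold."""
    recent = signal_bits[-window:]
    return sum(recent) / len(recent) >= threshold

# 76% of the last 1000 blocks signal support - just over the 75% line:
blocks = [1] * 760 + [0] * 240
print(fork_activates(blocks, threshold=0.75))   # True: activates, leaving a 24% minority chain at risk
print(fork_activates(blocks, threshold=0.95))   # False: a 95% rule waits for much broader agreement
```

The point of the higher threshold is simply that a fork activating at 75% can leave a quarter of the network on the old rules - the 75/25 split scenario.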
And today or yesterday there was this really cool big-blockian exponential graph based on doubling the blocksize every two years for twenty years, reminding us of the pure mathematical fact that 2^10 is indeed about 1000 - but not really addressing any of the game-theoretic points raised by the small-blockians. So a lot of the users seem to like it, but when so few devs say anything positive about it, I worry: is this just yet more exponential chart porn? On the one hand, Gavin's and Mike's blocksize increase proposal initially seemed like a no-brainer to me. And on the other hand, all the other devs seem to be against them. Which is weird - not what I'd initially expected at all (but maybe I'm just a fool who's seduced by exponential chart porn?). Look, I don't mean to be rude to any of the core devs, and I don't want to come off like someone wearing a tinfoil hat - but it has to cross people's minds that the powers that be (the Fed and the other central banks and the governments that use their debt-issued money to run this world into a ditch) could very well be much more scared shitless than they're letting on. If we assume that the powers that be are using their usual playbook and tactics, then it could be worth looking at the book "Confessions of an Economic Hitman" by John Perkins, to get an idea of how they might try to attack Bitcoin. So, what I'm saying is, they do have a track record of sending in "experts" to try to derail projects and keep everyone enslaved to the Creature from Jekyll Island. I'm just saying. So, without getting ad hominem - let's just make sure that our ideas can really stand scrutiny on their own - as Nick Szabo says, we need to make sure there is "more computer science, less noise" in this debate. When Gavin Andresen first came out with the 20 MB thing - I sat back and tried to imagine if I could download 20 MB in 10 minutes (which seems to be one of the basic mathematical and technological constraints here - right?)
I figured, "Yeah, I could download that" - even with my crappy internet connection. And I guess the telecoms might be nice enough to continue to double our bandwidth every two years for the next couple decades – if we ask them politely? On the other hand - I think we should be careful about entrusting the financial freedom of the world into the greedy hands of the telecoms companies - given all their shady shenanigans over the past few years in many countries. After decades of the MPAA and the FBI trying to chip away at BitTorrent, lately PirateBay has been hard to access. I would say it's quite likely that certain persons at institutions like JPMorgan and Goldman Sachs and the Fed might be very, very motivated to see Bitcoin fail - so we shouldn't be too sure about scaling plans which depend on the willingness of companies like Verizon and AT&T to double our bandwidth every two years. Maybe the real important hardware buildout challenge for a company like 21 (and its allies such as Qualcomm) to take on now would not be "a miner in every toaster" but rather "Google Fiber Download and Upload Speeds in every Country, including China". I think I've read all the major stuff on the blocksize debate from Gavin Andresen, Mike Hearn, Greg Maxwell, Peter Todd, Adam Back, and Jeff Garzik and several other major contributors - and, oddly enough, all their arguments seem reasonable - heck even Luke-Jr seems reasonable to me on the blocksize debate, and I always thought he was a whackjob overly influenced by superstition and numerology - and now today I'm reading the article by Bram Cohen - the inventor of BitTorrent - and I find myself agreeing with him too! I say to myself: What's going on with me? How can I possibly agree with all of these guys, if they all have such vehemently opposing viewpoints?
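For what it's worth, the arithmetic behind both the exponential chart and the "20 MB in 10 minutes" question is easy to check (the 20 MB starting point is just the number from the big-block proposal; the schedule below is the hypothetical doubling-every-two-years one, not anyone's actual spec):

```python
# Pure arithmetic: doubling every two years for twenty years is ten
# doublings, and 2**10 = 1024 - roughly the 1000x factor behind the chart.
doublings = 20 // 2
print(2 ** doublings)                    # 1024

# The hypothetical schedule itself, starting from a 20 MB cap:
blocksize_mb = 20
for year in range(0, 21, 2):
    print(year, blocksize_mb * 2 ** (year // 2), "MB")

# And the basic constraint mentioned above: one 20 MB block every
# 10 minutes is a fairly modest *sustained* download rate.
mbps = 20 * 8 / (10 * 60)                # megabits per second
print(round(mbps, 2))                    # 0.27 Mbps sustained
```

Of course, sustained average throughput is the easy part; block *propagation* (getting a new block to all peers in seconds, not minutes) is the harder constraint the small-blockians point at.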
I mean, think back to the glory days of a couple of years ago, when all we were hearing was how this amazing unprecedented grassroots innovation called Bitcoin was going to benefit everyone from all walks of life, all around the world:
wealthy individuals trying to preserve and transport their wealth across space and across time
iPhone and Android users who want to buy a latte on their smartphone at Starbucks
Venezuelans and Argentinians and Cypriots and Russian oligarchs and Greeks and anyone else whose state-backed currency sucks
unbanked Africans who will someday be texting around money via SMS messages on their cellphones
online content providers who will finally be able to get paid via micropayments
smart contracts and stock brokering and lawyering and land deeding and the refrigerator calling out to order more milk and distributed anonymous corporations (DACs) automatically negotiating and adjusting driverless taxicab fares in the Uber-future of the Internet of Things
...basically the entire human race transacting everything into the blockchain. (Although let me say that I think that people's focus on ideas like driverless cabs creating realtime fare markets based on supply and demand seems to be setting our sights a bit low as far as Bitcoin's abilities to correct the financial world's capital-misallocation problems which seem to have been made possible by infinite debt-based fiat. I would have hoped that a Bitcoin-based economy would solve much more noble, much more urgent capital-allocation problems than driverless taxicabs creating fare markets or refrigerators ordering milk on the internet of things. I was thinking more along the lines that Bitcoin would finally strangle dead-end debt-based deadly-toxic energy industries like fossil fuels and let profitable clean energy industries like Thorium LFTRs take over - but that's another topic. :-) Paradoxes in the blocksize debate Let me summarize the major paradoxes I see here: (1) Regarding the people (the majority of the core devs) who are against a blocksize increase: Well, the small-blocks arguments do seem kinda weird, and certainly not very "populist", in the sense that: When on earth have end-users ever heard of a computer technology whose capacity didn't grow pretty much exponentially year-on-year? All the cool new technology we've had - from hard drives to RAM to bandwidth - started out pathetically tiny and grew to unimaginably huge over the past few decades - and all our software has in turn gotten massively powerful and big and complex (sometimes bloated) to take advantage of the enormous new capacity available. But now suddenly, for the first time in the history of technology, we seem to have a majority of the devs, on a major p2p project - saying: "Let's not scale the system up. It could be dangerous. It might break the whole system (if the hard-fork fails)."
I don't know, maybe I'm missing something here, maybe someone else could enlighten me, but I don't think I've ever seen this sort of thing happen in the last few decades of the history of technology - devs arguing against scaling up p2p technology to take advantage of expected growth in infrastructure capacity. (2) But... on the other hand... the dire warnings of the small-blockians about what could happen if a hard-fork were to fail - wow, they do seem really dire! And these guys are pretty much all heavyweight, experienced programmers and/or game theorists and/or p2p open-source project managers. I must say that nearly all of the long-form arguments I've read - as well as many, many of the shorter comments I've read from many users in the threads, whose names I at least have come to more-or-less recognize over the past few months and years on reddit and bitcointalk - have been amazingly impressive in their ability to analyze all aspects of the lifecycle and management of open-source software projects, bringing up lots of serious points which I could never have come up with, and which seem to come from long experience with programming and project management - as well as dealing with economics and human nature (eg, greed - the game-theory stuff). So a lot of really smart and experienced people with major expertise in various areas ranging from programming to management to game theory to politics to economics have been making some serious, mature, compelling arguments. But, as I've been saying, the only problem to me is: in many of these cases, these arguments are vehemently in opposition to each other! So I find myself agreeing with pretty much all of them, one by one - which means the end result is just a giant contradiction. I mean, today we have Bram Cohen, the inventor of BitTorrent, arguing (quite cogently and convincingly to me), that it would be dangerous to increase the blocksize.
And this seems to be a guy who would know a few things about scaling out a massive global p2p network - since the protocol which he invented, BitTorrent, is now apparently responsible for like a third of the traffic on the internet (and this despite the long-term concerted efforts of major evil players such as the MPAA and the FBI to shut the whole thing down). Was the BitTorrent analogy too "glib"? By the way - I would like to go on a slight tangent here and say that one of the main reasons why I felt so "comfortable" jumping on the Bitcoin train back a few years ago, when I first heard about it and got into it, was the whole rough analogy I saw with BitTorrent. I remembered the perhaps paradoxical fact that when a torrent is more popular (eg, a major movie release that just came out last week), then it actually becomes faster to download. More people want it, so more people have a few pieces of it, so more people are able to get it from each other. A kind of self-correcting economic feedback loop, where more demand directly leads to more supply. (BitTorrent manages to pull this off by essentially adding a certain structure to the file being shared, so that it's not simply like an append-only list of 1 MB blocks, but rather more like a random-access or indexed array of 1 MB chunks. Say you're downloading a film which is 700 MB. As soon as your "client" program has downloaded a single 1-MB chunk - say chunk #99 - your "client" program instantly turns into a "server" program as well - offering that chunk #99 to other clients. From my simplistic understanding, I believe the Bitcoin protocol does something similar, to provide a p2p architecture. Hence my - perhaps naïve - assumption that Bitcoin already had the right algorithms / architecture / data structure to scale.) The efficiency of the BitTorrent network seemed to jibe with that "network law" (Metcalfe's Law?) about fax machines.
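That chunk-indexing idea can be sketched in a few lines. All names here are invented for illustration - this is nothing like the real BitTorrent wire protocol, just the "client becomes a server per-chunk" structure described above:

```python
# Toy sketch: a file is an indexed array of fixed-size chunks, and a
# peer serves any chunk it already holds, even before it has the whole
# file. This is the self-correcting supply/demand loop in miniature.

class Peer:
    def __init__(self, total_chunks):
        self.total = total_chunks
        self.have = set()              # indices of chunks we hold

    def receive(self, index):
        self.have.add(index)           # client side: just got chunk #index

    def can_serve(self, index):
        return index in self.have      # server side, instantly

seeder = Peer(700)
seeder.have = set(range(700))          # has the whole 700-chunk "film"

leecher = Peer(700)
leecher.receive(99)                    # just downloaded chunk #99...
print(leecher.can_serve(99))           # True: ...and can already serve it
print(leecher.can_serve(100))          # False: doesn't have this one yet
```

The disanalogy with Bitcoin, which the post gets to later, is that a torrent is a read-only object, while the blockchain requires consensus on what gets appended.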
This law states that the more fax machines there are, the more valuable the network of fax machines becomes. Or the value of the network grows on the order of the square of the number of nodes. This is in contrast with other technology like cars, where the more you have, the worse things get. The more cars there are, the more traffic jams you have, so things start going downhill. I guess this is because highway space is limited - after all, we can't pave over the entire countryside, and we never did get those flying cars we were promised, as David Graeber laments in a recent essay in The Baffler magazine :-) And regarding the "stress test" supposedly happening right now in the middle of this ongoing blocksize debate, I don't know what worries me more: the fact that it apparently is taking only $5,000 to do a simple kind of DoS on the blockchain - or the fact that there are a few rumors swirling around saying that the unknown company doing the stress test shares the same physical mailing address with a "scam" company? Or maybe we should just be worried that so much of this debate is happening on a handful of forums which are controlled by some guy named theymos who's already engaged in some pretty "contentious" or "controversial" behavior like blowing a million dollars on writing forum software (I guess he never heard that reddit.com software is open-source)? So I worry that the great promise of "decentralization" might be more fragile than we originally thought. Scaling Anyways, back to Metcalfe's Law: with virtual stuff, like torrents and fax machines, the more the merrier. The more people downloading a given movie, the faster it arrives - and the more people own fax machines, the more valuable the overall fax network. So I kind of (naïvely?) assumed that Bitcoin, being "virtual" and p2p, would somehow scale up the same magical way BitTorrent did. I just figured that more people using it would somehow automatically make it stronger and faster.
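The fax-machine law is easy to make concrete. One common reading of Metcalfe's Law counts the possible pairwise connections among n nodes, which grows quadratically:

```python
# Metcalfe's Law in one line: each of n fax machines can reach n-1
# others, so the number of possible connections is n*(n-1)/2 - order n**2.
def metcalfe_value(n):
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, metcalfe_value(n))        # 2->1, 10->45, 100->4950

# Doubling the number of machines roughly quadruples the connections:
print(metcalfe_value(200) / metcalfe_value(100))   # ~4
```

This is the "more the merrier" intuition; the cars-and-traffic-jams case is the opposite regime, where a shared scarce resource (highway space) makes each additional node a cost rather than a benefit.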
But now a lot of devs have started talking in terms of the old "scarcity" paradigm, talking about blockspace being a "scarce resource" and talking about "fee markets" - which seems kinda scary, and antithetical to much of the earlier rhetoric we heard about Bitcoin (the stuff about supporting our favorite creators with micropayments, and the stuff about Africans using SMS to send around payments). Look, when some asshole is in line in front of you at the cash register and he's holding up the line so he can run his credit card to buy a bag of Cheetos, we tend to get pissed off at the guy - clogging up our expensive global electronic payment infrastructure to make a two-dollar purchase. And that's on a fairly efficient centralized system - and presumably after a year or so, VISA and the guy's bank can delete or compress the transaction in their SQL databases. Now, correct me if I'm wrong, but if some guy buys a coffee on the blockchain, or if somebody pays an online artist $1.99 for their work - then that transaction, a few bytes or so, has to live on the blockchain forever? Or is there some "pruning" thing that gets rid of it after a while? And this could lead to another question: Viewed from the perspective of double-entry bookkeeping, is the blockchain "world-wide ledger" more like the "balance sheet" part of accounting, i.e. a snapshot showing current assets and liabilities? Or is it more like the "cash flow" part of accounting, i.e. a journal showing historical revenues and expenses? When I think of thousands of machines around the globe having to lug around multiple identical copies of a multi-gigabyte file containing some asshole's coffee purchase forever and ever... I feel like I'm ideologically drifting in one direction (where I'd end up also being against really cool stuff like online micropayments and Africans banking via SMS)... so I don't want to go there.
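The double-entry question above can be made concrete with a toy journal (names and amounts invented): the blockchain plays the role of the "cash flow" journal - every historical transaction - while the "balance sheet" snapshot is something you can *derive* from it, which is roughly the observation that pruning schemes exploit:

```python
# The blockchain as a journal of historical transactions; the snapshot
# of current balances is computable by replaying it. Pruning amounts to
# keeping (a commitment to) the snapshot and discarding spent history.

journal = [
    ("coinbase", "alice", 50),
    ("alice", "bob", 20),
    ("bob", "cafe", 2),      # the $2 coffee lives in the journal forever
]

def balance_sheet(journal):
    """Derive the 'balance sheet' snapshot from the 'cash flow' journal."""
    balances = {}
    for sender, receiver, amount in journal:
        if sender != "coinbase":                 # coinbase mints new coins
            balances[sender] = balances.get(sender, 0) - amount
        balances[receiver] = balances.get(receiver, 0) + amount
    return balances

print(balance_sheet(journal))   # {'alice': 30, 'bob': 18, 'cafe': 2}
```

So the short answer to the post's question is: the chain itself is the journal, but a full node only needs the snapshot (the set of unspent outputs) to validate new spending.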
But on the other hand, when really experienced and battle-tested veterans with major experience in the world of open-source programming and project management (the "small-blockians") warn of the catastrophic consequences of a possible failed hard-fork, I get freaked out and I wonder if Bitcoin really was destined to be a settlement layer for big transactions. Could the original programmer(s) possibly weigh in? And I don't mean to appeal to authority - but heck, where the hell is Satoshi Nakamoto in all this? I do understand that he/she/they would want to maintain absolute anonymity - but on the other hand, I assume SN wants Bitcoin to succeed (both for the future of humanity - or at least for all the bitcoins SN allegedly holds :-) - and I understand there is a way that SN can cryptographically sign a message - and I understand that as the original developer of Bitcoin, SN had some very specific opinions about the blocksize... So I'm kinda wondering if Satoshi could weigh in from time to time. Just to help out a bit. I'm not saying "Show us a sign" like a deity or something - but damn it sure would be fascinating and possibly very helpful if Satoshi gave us his/her/their 2 satoshis worth at this really confusing juncture. Are we using our capacity wisely? I'm not a programming or game-theory whiz, I'm just a casual user who has tried to keep up with technology over the years.
It just seems weird to me that here we have this massive supercomputer (500 times more powerful than all the supercomputers in the world combined) doing fairly straightforward "embarrassingly parallel" number-crunching operations to secure a p2p world-wide ledger called the blockchain to keep track of a measly 2.1 quadrillion tokens spread out among a few billion addresses - and a couple of years ago you had people like Rick Falkvinge saying the blockchain would someday be supporting multi-million-dollar letters of credit for international trade and you had people like Andreas Antonopoulos saying the blockchain would someday allow billions of "unbanked" people to send remittances around the village or around the world dirt-cheap - and now suddenly in June 2015 we're talking about blockspace as a "scarce resource" and talking about "fee markets" and partially centralized, corporate-sponsored "Level 2" vaporware like Lightning Network and some mysterious company is "stress testing" or "DoS-ing" the system by throwing away a measly $5,000 and suddenly it sounds like the whole system could eventually head right back into PayPal and Western Union territory again, in terms of expensive fees. When I got into Bitcoin, I really was heavily influenced by vague analogies with BitTorrent: I figured everyone would just have a tiny little utorrent-type program running on their machine (ie, Bitcoin-QT or Armory or Mycelium etc.). I figured that just like anyone can host their own blog or webserver, anyone would be able to host their own bank. Yeah, Google and Mozilla and Twitter and Facebook and WhatsApp did come along and build stuff on top of TCP/IP, so I did expect a bunch of companies to build layers on top of the Bitcoin protocol as well.
But I still figured the basic unit of bitcoin client software powering the overall system would be small and personal and affordable and p2p - like a bittorrent client - or at the most, like a cheap server hosting a blog or email server. And I figured there would be a way at the software level, at the architecture level, at the algorithmic level, at the data structure level - to let the thing scale - if not infinitely, at least fairly massively and gracefully - the same way the BitTorrent network has. Of course, I do also understand that with BitTorrent, you're sharing a read-only object (eg, a movie) - whereas with Bitcoin, you're achieving distributed trustless consensus and appending it to a write-only (or append-only) database. So I do understand that the problem which BitTorrent solves is much simpler than the problem which Bitcoin sets out to solve. But still, it seems that there's got to be a way to make this thing scale. It's p2p and it's got 500 times more computing power than all the supercomputers in the world combined - and so many brilliant and motivated and inspired people want this thing to succeed! And Bitcoin could be our civilization's last chance to steer away from the oncoming debt-based ditch of disaster we seem to be driving into! It just seems that Bitcoin has got to be able to scale somehow - and all these smart people working together should be able to come up with a solution which pretty much everyone can agree - in advance - will work. Right? Right? A (probably irrelevant) tangent on algorithms and architecture and data structures I'll finally weigh in with my personal perspective - although I might be biased due to my background (which is more on the theoretical side of computer science). My own modest - or perhaps radical - suggestion would be to ask whether we're really looking at all the best possible algorithms and architectures and data structures out there.
From this perspective, I sometimes worry that the overwhelming majority of the great minds working on the programming and game-theory stuff might come from a rather specific, shall we say "von Neumann" or "procedural" or "imperative" school of programming (ie, C and Python and Java programmers). It seems strange to me that such a cutting-edge and important computer project would have so little participation from the great minds at the other end of the spectrum of programming paradigms - namely, the "functional" and "declarative" and "algebraic" (and co-algebraic!) worlds. For example, I was struck in particular by statements I've seen here and there (which seemed rather hubristic or lackadaisical to me - for something as important as Bitcoin), that the specification of Bitcoin and the blockchain doesn't really exist in any form other than the reference implementation(s) (in procedural languages such as C or Python?). Curry-Howard anyone? I mean, many computer scientists are aware of the Curry-Howard isomorphism, which basically says that the relationship between a theorem and its proof is equivalent to the relationship between a specification and its implementation. In other words, there is a long tradition in mathematics (and in computer programming) of:
separating the compact (and easy-to-check) statement of a theorem from the messy (and hard-to-check) details of its proof(s);
separating the specification of a system from its implementation(s); and
being able to prove that an implementation does indeed satisfy its specification.
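The three points above can be shown in miniature with an executable specification - a toy stand-in for what a Maude-style algebraic spec would give you, using sorting as the example (nothing here is from any actual Bitcoin proposal):

```python
# The specification of "sorting" is compact and easy to eyeball-check;
# the implementation is the messy part that needs checking against it.

def satisfies_sort_spec(inp, out):
    """Specification: output is ordered AND is a permutation of input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    same_elements = sorted(inp) == sorted(out)
    return ordered and same_elements

def my_sort(xs):
    """Implementation: loops and swaps - hard to verify by inspection..."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[j] < xs[i]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs

# ...but checking it against the spec is mechanical:
data = [5, 3, 8, 1, 3]
print(satisfies_sort_spec(data, my_sort(data)))   # True
```

A real proof system goes further - it establishes the property for *all* inputs rather than testing a few - but the division of labor is the same: the compact statement of *what*, separated from the messy details of *how*.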
And it's not exactly "turtles all the way down" either: a specification is generally simple and compact enough that a good programmer can usually simply visually inspect it to determine if it is indeed "correct" - something which is very difficult, if not impossible, to do with a program written in a procedural, implementation-oriented language such as C or Python or Java. So I worry that we've got this tradition, from the open-source github C/Java programming tradition, of never actually writing our "specification", and only writing the "implementation". In mission-critical military-grade programming projects (which often use languages like Ada or Maude) this is simply not allowed. It would seem that a project as mission-critical as Bitcoin - which could literally be crucial for humanity's continued survival - should also use this kind of military-grade software development approach. And I'm not saying rewrite the implementations in these kind of theoretical languages. But it might be helpful if the C/Python/Java programmers in the Bitcoin imperative programming world could build some bridges to the Maude/Haskell/ML programmers of the functional and algebraic programming worlds to see if any kind of useful cross-pollination might take place - between specifications and implementations. For example, the JavaFAN formal analyzer for multi-threaded Java programs (developed using tools based on the Maude language) was applied to the Remote Agent AI program aboard NASA's Deep Space 1 probe, written in Java - and it took only a few minutes using formal mathematical reasoning to detect a potential deadlock which would have occurred years later during the space mission when the damn spacecraft was already far out in deep space. And "the Maude-NRL (Naval Research Laboratory) Protocol Analyzer (Maude-NPA) is a tool used to provide security proofs of cryptographic protocols and to search for protocol flaws and cryptosystem attacks."
These are open-source formal reasoning tools developed by DARPA and used by NASA and the US Navy to ensure that program implementations satisfy their specifications. It would be great if some of the people involved in these kinds of projects could contribute to help ensure the security and scalability of Bitcoin. But there is a wide abyss between the kinds of programmers who use languages like Maude and the kinds of programmers who use languages like C/Python/Java - and it can be really hard to get the two worlds to meet. There is a bit of rapprochement between these language communities in languages which might be considered as being somewhere in the middle, such as Haskell and ML. I just worry that Bitcoin might be turning into an exclusively C/Python/Java project (with the algorithms and practitioners traditionally of that community), when it could be more advantageous if it also had some people from the functional and algebraic-specification and program-verification community involved as well. The thing is, though: the theoretical practitioners are big on "semantics" - I've heard them say stuff like "Yes but a C / C++ program has no easily identifiable semantics". So to get them involved, you really have to first be able to talk about what your program does (specification) - before proceeding to describe how it does it (implementation). And writing high-level specifications is typically very hard using the syntax and semantics of languages like C and Java and Python - whereas specs are fairly easy to write in Maude - and not only that, they're executable, and you can state and verify properties about them - which provides for the kind of debate Nick Szabo was advocating ("more computer science, less noise"). Imagine if we had an executable algebraic specification of Bitcoin in Maude, where we could formally reason about and verify certain crucial game-theoretical properties - rather than merely hand-waving and arguing and deploying and praying.
And so in the theoretical programming community you've got major research on various logics such as Girard's Linear Logic (which is resource-conscious) and Bruni and Montanari's Tile Logic (which enables "pasting" bigger systems together from smaller ones in space and time), and executable algebraic specification languages such as Meseguer's Maude (which would be perfect for game theory modeling, with its functional modules for specifying the deterministic parts of systems and its system modules for specifying non-deterministic parts of systems, and its parameterized skeletons for sketching out the typical architectures of mobile systems, and its formal reasoning and verification tools and libraries which have been specifically applied to testing and breaking - and fixing - cryptographic protocols). And somewhat closer to the practical hands-on world, you've got stuff like Google's MapReduce and lots of Big Data database languages developed by Google as well. And yet here we are with a mempool growing dangerously big for RAM on a single machine, and a 20-GB append-only list as our database - and not much debate on practical results from Google's Big Data databases. (And by the way: maybe I'm totally ignorant for asking this, but I'll ask anyways: why the hell does the mempool have to stay in RAM? Couldn't it work just as well if it were stored temporarily on the hard drive?) And you've got CalvinDB out of Yale which apparently provides an ACID layer on top of a massively distributed database. Look, I'm just an armchair follower cheering on these projects. I can barely manage to write a query in SQL, or read through a C or Python or Java program.
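As for the mempool-on-disk question: purely as an illustration (this says nothing about whether Bitcoin Core could do this efficiently, and the schema here is invented), the mempool's data structure does map naturally onto an ordinary database. Here SQLite is used with an in-memory database, but the connection string could just as well be a file on disk:

```python
# Toy disk-backable transaction pool. The main thing a miner needs from
# a mempool - "give me the highest-paying transactions" - is a single
# indexed query.
import sqlite3

db = sqlite3.connect(":memory:")   # swap ":memory:" for a file path for disk backing
db.execute(
    "CREATE TABLE mempool (txid TEXT PRIMARY KEY, fee_per_kb REAL, raw BLOB)"
)

def add_tx(txid, fee_per_kb, raw=b""):
    db.execute("INSERT OR IGNORE INTO mempool VALUES (?, ?, ?)",
               (txid, fee_per_kb, raw))

def best_paying(n):
    """Select the n highest-fee transactions, as a block template builder would."""
    rows = db.execute(
        "SELECT txid FROM mempool ORDER BY fee_per_kb DESC LIMIT ?", (n,))
    return [r[0] for r in rows]

add_tx("aa", 10.0)
add_tx("bb", 50.0)
add_tx("cc", 25.0)
print(best_paying(2))   # ['bb', 'cc']
```

The real obstacles are presumably latency and churn (transactions arriving and being evicted thousands of times per second), not the data model - which is exactly the kind of question the Big Data literature the post mentions has studied.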
But I would argue two points here: (1) these languages may be too low-level and "non-formal" for writing and modeling and formally reasoning about and proving properties of mission-critical specifications - and (2) there seem to be some Big Data tools already deployed by institutions such as Google and Yale which support global petabyte-size databases on commodity boxes with nice properties such as near-real-time and ACID - and I sometimes worry that the "core devs" might be failing to review the literature (and reach out to fellow programmers) out there to see if there might be some formal program-verification and practical Big Data tools out there which could be applied to coming up with rock-solid, 100% consensus proposals to handle an issue such as blocksize scaling, which seems to have become much more intractable than many people might have expected. I mean, the protocol solved the hard stuff: the elliptic-curve stuff and the Byzantine Generals stuff. How the heck can we be falling down on the comparatively "easier" stuff - like scaling the blocksize? It just seems like defeatism to say "Well, the blockchain is already 20-30 GB and it's gonna be 20-30 TB ten years from now - and we need 10 Mbps bandwidth now and 10,000 Mbps bandwidth 20 years from now - assuming the evil Verizon and AT&T actually give us that - so let's just become a settlement platform and give up on buying coffee or banking the unbanked or doing micropayments, and let's push all that stuff into some corporate-controlled vaporware without even a whitepaper yet." So you've got Peter Todd doing some possibly brilliant theorizing and extrapolating on the idea of "treechains" - there is a Let's Talk Bitcoin podcast from about a year ago where he sketches the rough outlines of this idea out in a very inspiring, high-level way - although the specifics have yet to be hammered out. And we've got Blockstream also doing some hopeful hand-waving about the Lightning Network.
Things like Peter Todd's treechains - which may be similar to the spark in some devs' eyes called Lightning Network - are examples of the kind of algorithm or architecture which might manage to harness the massive computing power of miners and nodes in such a way that certain kinds of massive and graceful scaling become possible. It just seems like a kind of tiny dev community working on this stuff.

Being a C or Python or Java programmer should not be a pre-req to being able to help contribute to the specification (and formal reasoning and program verification) for Bitcoin and the blockchain. XML and UML are crap modeling and specification languages, and C and Java and Python are even worse (as specification languages - although as implementation languages, they are of course fine). But there are serious modeling and specification languages out there, and they could be very helpful at times like this - where what we're dealing with is questions of modeling and specification (ie, "needs and requirements").

One just doesn't often see the practical, hands-on world of open-source GitHub implementation-level programmers and the academic, theoretical world of specification-level programmers meeting. I wish there were some way to get these two worlds to collaborate on Bitcoin.

Maybe a good first step to reach out to the theoretical people would be to provide a modular executable algebraic specification of the Bitcoin protocol in a recognized, military/NASA-grade specification language such as Maude - because that's something the theoretical community can actually wrap their heads around, whereas it's very hard to get them to pay attention to something written only as a C / Python / Java implementation (without an accompanying specification in a formal language). They can't check whether the program does what it's supposed to do - if you don't provide a formal mathematical definition of what the program is supposed to do.
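Just to make the spec-vs-implementation distinction concrete, here's a deliberately tiny sketch of the style I mean - in Python rather than Maude (since that's what I can actually type), and with completely made-up names and rules (no UTXOs, no scripts, no signatures): first state what "valid" means, then give the transition rule, then state a property (conservation of total supply) that a formal tool could try to prove rather than merely test.

```python
# Toy stand-in for a ledger specification - NOT the real Bitcoin protocol.
# A ledger is a list of (address, delta) entries.

def balance(ledger, addr):
    """Sum of all deltas recorded for addr."""
    return sum(v for a, v in ledger if a == addr)

def valid_tx(ledger, sender, receiver, amount):
    """The 'specification': a transfer is valid iff it moves a
    positive amount that the sender can cover."""
    return amount > 0 and balance(ledger, sender) >= amount

def apply_tx(ledger, sender, receiver, amount):
    """The 'implementation': apply a valid transfer, reject otherwise."""
    if not valid_tx(ledger, sender, receiver, amount):
        return ledger
    return ledger + [(sender, -amount), (receiver, amount)]

def total_supply(ledger):
    """A provable property tying spec and implementation together:
    apply_tx never changes this number."""
    return sum(v for _, v in ledger)
```

In a real specification language like Maude you'd write the same thing as equations and rewrite rules, and the conservation property would be a theorem to discharge, not an assert to run.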
Specification : Implementation :: Theorem : Proof

You have to remember: the theoretical community is very aware of the Curry-Howard isomorphism. Just like it would be hard to get a mathematician's attention by merely showing them a proof without also telling them what theorem the proof is proving - by the same token, it's hard to get the attention of a theoretical computer scientist by merely showing them an implementation without showing them the specification that it implements.

Bitcoin is currently confronted with a mathematical or "computer science" problem: how to secure the network while getting high enough transactional throughput, while staying within the RAM, bandwidth and hard-drive limitations of current and future infrastructure. The problem only becomes a political and economic problem if we give up on trying to solve it as a mathematical and "theoretical computer science" problem. There should be a plethora of whitepapers out now proposing algorithmic solutions to these scaling issues.

Remember, all we have to do is apply the Byzantine Generals consensus-reaching procedure to a worldwide database which shuffles 2.1 quadrillion tokens among a few billion addresses. The 21 company has emphatically pointed out that racing to compute a hash to add a block is an "embarrassingly parallel" problem - very easy to decompose among cheap, fault-prone, commodity boxes, and recompose into an overall solution - along the lines of Google's highly successful MapReduce.

I guess what I'm really saying (and I don't mean to be rude here) is that C and Python and Java programmers might not be the best-qualified people to develop and formally prove the correctness of (note I do not say: "test", I say "formally prove the correctness of") these kinds of algorithms.
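To show why hash-racing is "embarrassingly parallel" in the MapReduce sense, here's a toy sketch (the hash is a numeric stand-in I made up, NOT double SHA-256, and the loop is sequential where a real miner would fan chunks out across machines): the nonce space splits into independent chunks (map), each chunk is searched with no shared state, and per-chunk answers are trivial to recombine (reduce).

```python
# Toy illustration of the map/reduce shape of mining. toy_hash is a
# made-up mixing function, not a cryptographic hash.

def toy_hash(header, nonce):
    return ((header * 2654435761) ^ (nonce * 40503)) % 65536

def search_chunk(header, target, start, stop):
    """Map step: one worker scans one chunk of the nonce space,
    completely independently of every other worker."""
    for nonce in range(start, stop):
        if toy_hash(header, nonce) < target:
            return nonce
    return None

def mine(header, target, chunk_size, total):
    """Reduce step: combining per-chunk results is trivial - take any
    hit. Each search_chunk call could run on a separate cheap box."""
    for start in range(0, total, chunk_size):
        hit = search_chunk(header, target, start, min(start + chunk_size, total))
        if hit is not None:
            return hit
    return None
```

The point is the structure, not the arithmetic: no chunk ever needs to talk to another chunk, which is exactly what makes the decompose/recompose pattern of MapReduce applicable.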
I really believe in the importance of getting the algorithms and architectures right - look at Google Search itself: it uses some pretty brilliant algorithms and architectures (eg, MapReduce, Paxos) which enable it to achieve amazing performance - on pretty crappy commodity hardware. And look at BitTorrent, which is truly p2p, where more demand leads to more supply.

So, in this vein, I will close this lengthy rant with an oddly specific link - which may or may not be able to make some interesting contributions to finding suitable algorithms, architectures and data structures which might help Bitcoin scale massively. I have no idea if this link could be helpful - but given the near-total lack of people from the Haskell and ML and functional worlds in these Bitcoin specification debates, I thought I'd be remiss if I didn't throw this out - just in case there might be something here which could help us channel the massive computing power of the Bitcoin network in such a way as to enable us to simply sidestep this kind of desperate debate where both sides seem right because the other side seems wrong.

https://personal.cis.strath.ac.uk/neil.ghani/papers/ghani-calco07

The above paper is about "higher-dimensional trees". It uses a bit of category theory (not a whole lot) and a bit of Haskell (again not a lot - just a simple data structure called a Rose tree, which has a Wikipedia page) to develop a very expressive and efficient data structure which generalizes from lists to trees to higher dimensions. I have no idea if this kind of data structure could be applicable to the current scaling mess we apparently are getting bogged down in - I don't have the game-theory skills to figure it out.
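For anyone wondering what a Rose tree even is: the structure itself is tiny (in Haskell it's literally `data Rose a = Node a [Rose a]`). Here's my own rough Python rendering of just the basic data structure - not the paper's higher-dimensional generalization - showing how a plain list is the degenerate case where every node has exactly one child:

```python
# A Rose tree: a value plus ANY number of subtrees. This is only the
# basic structure from the paper's starting point, rendered in Python.

class Rose:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def from_list(xs):
    """View a list as a chain-shaped Rose tree (one child per node) -
    the blockchain's shape today."""
    node = None
    for x in reversed(xs):
        node = Rose(x, [node] if node else [])
    return node

def size(t):
    return 1 + sum(size(c) for c in t.children)

def depth(t):
    return 1 + max((depth(c) for c in t.children), default=0)

def flatten(t):
    """Walking the tree recovers the list of values."""
    return [t.value] + [x for c in t.children for x in flatten(c)]
```

The generalization the paper pursues - from lists to trees to higher dimensions - is exactly the direction the side-chain and tree-chain proposals seem to be groping toward, which is why I'm throwing it out here.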
I just thought that since the blockchain is like a list, and since there are some tree-like structures which have been grafted on for efficiency (eg, Merkle trees), and since many of the futuristic scaling proposals seem to also involve generalizing from list-like structures (eg, the blockchain) to tree-like structures (eg, side-chains and tree-chains)... well, who knows, there might be some nugget of algorithmic or architectural or data-structure inspiration there.

So... TL;DR:

(1) I'm freaked out that this blocksize debate has splintered the community so badly and dragged on so long, with no resolution in sight, and both sides seeming so right (because the other side seems so wrong).

(2) I think Bitcoin could gain immensely by using high-level formal, algebraic and co-algebraic program specification and verification languages (such as Maude, including Maude-NPA, Mobile Maude, parameterized skeletons, etc.) to specify (and possibly also, to some degree, verify) what Bitcoin does - before translating to low-level implementation languages such as C and Python and Java saying how Bitcoin does it. This would help us communicate and reason about programs with much more mathematical certitude - and possibly obviate the need for many political and economic tradeoffs which currently seem dismally inevitable - and possibly widen the collaboration on this project.

(3) I wonder if there are some Big Data approaches out there (eg, along the lines of Google's MapReduce and BigTable, or Yale's CalvinDB) which could be implemented to allow Bitcoin to scale massively and painlessly - and to satisfy all stakeholders, ranging from millionaires to micropayments, coffee drinkers to the great "unbanked".