Bitcoin Script: An Introduction For Beginners - Komodo

Urbit meetup in North Texas

Hi everybody, I'm holding a meetup in the DFW area for people interested in Urbit next month. If you're interested in the project or want to learn more about it, come hang out! Details are at the end of the post. I've got the blessing of u/ZorbaTHut to post this here contingent on explaining why Urbit is interesting, both in general and for this audience, so I'll give you a brief outline of the project if you're not familiar, and answer questions you may have once I'm home from work on Monday (though I encourage anybody else who'd like to chime in until then -- I have to go to bed soon).

What is Urbit?

Urbit is an internet decentralization project, and a full networked computing stack built from the ground up. Urbit's ultimate goal is to build a new internet on top of the old one, one that is architecturally designed to avoid the need for centralized services by allowing individuals to run and program robust personal servers that are simple to manage. When Urbit conquers the world, your digital identity will be something you personally and permanently own as a cryptographic key, not a line in a corporation's database; Facebook and Twitter will be protocols -- encrypted traffic and data shared directly between you and your friends & family, with no middlemen spying on you; your apps, social software and anything you program will have secure cryptocurrency payment mechanisms available as a system call, paid out of a wallet on a device you fully control; and you will tangibly own and control your computer and the networked software you use on it.
As I said, Urbit is a stack; at its core is Nock, a minimal, Turing-complete function. Nock is built out into a deterministic operating system, Arvo, with its own functional programming language. For now, Arvo runs as a process, with a custom VM/interpreter, on *nix machines. Your Arvo instance talks to other instances over a native, encrypted peer-to-peer network, though it can interface with the normal internet as well. Urbit's identity management system is called Azimuth, a public key infrastructure built on Ethereum. You own proof of your Urbit instance's identity as a token, in the same way you own your Bitcoin wallet.
Because the peer-to-peer network is built into Arvo, you get it 'for free' with any software you write or run on it. You run your own personal server, and you run all the software you use to communicate with the world yourself. Because all of your services run on a computer you control, using a single secure identity system, you can think of what it aspires to be as a decentralized, cypherpunk version of WeChat -- a programmable, secure platform for everything you want to do with your computer in one place, without the downsides of other people running your software.

Why is it interesting?

Urbit is extremely ambitious and pretty strange. Why throw out the entire stack we've spent half a century building? Because it's a giant ball of mud -- millions of lines of code in the Linux kernel alone, with all the attendant security issues and complexity. You can run a personal server today if you're technically sophisticated; spin up a VPS, install all the software you need, configure everything and keep it secure. It's doable, but it sucks, and your mom can't do it. Urbit is designed from the beginning to avoid the pitfalls that led to cascading system complexity. Nock has 12 opcodes, and Arvo is somewhere in the neighborhood of 30,000 lines of code. The core pieces of Urbit are also ticking towards being 'frozen' -- reaching a state where they can no longer be changed, in order to ensure that they remain absolutely minimal. The point of all of this is to make a diamond-hard, unchanging core that a single person can actually understand in its entirety, ensure the security of the architecture, prevent insane dependency hell and leaky abstractions from overgrowing it, and allow for software you write today to run in a century. It also aims to be simple enough that a normal person can pay a commodity provider $5/mo (or something), log into their Urbit on their devices, and control it as easily as their phone.
Urbit's network also has a routing hierarchy that is important to understand; while the total address space is 128-bit, the addresses are partitioned into different classes. 8-bit and 16-bit addresses act as network infrastructure, while human instances use 32-bit addresses. To use the network, you must be sponsored by the 16-bit node 'above' you -- which is to say 'be on good terms'. If you aren't on good terms, that sponsorship can be terminated, but that goes both ways -- if you don't like your sponsor, you can exit and choose another. Because 32-bit addresses are finite, they're scarce and have value, which disincentivizes spam and abuse. To be clear, the sponsor nodes only sign/deliver software updates, and perform peer discovery and NAT traversal; your connections with other people are direct and encrypted. Because there are many sponsor nodes, you can return to the network if you're kicked off unfairly. In the long term, this also allows for graceful political fragmentation of the network if necessary.
The world created by Urbit is a world where individuals control their own data and digital communities live according to their mores. It's an internet that isn't funded by mass automated surveillance and ad companies that know your health problems. It's also the internet as a frontier like it once was, at least until this one is settled. Apologies if this comes off a little true-believer-y, but this project is something I'm genuinely excited about.

For TheMotte

The world that Urbit aims to build is one not dissimilar from Scott's archipelago communism -- one of voluntaristic relations and communities, and exit in the face of conflict & coercion. It's technical infrastructure to move the internet away from the chokepoints of the major social media platforms and the concentration of political power that comes with centralized services. The seismic shifts affecting our institutions and society caused by the internet in the last decade have been commented on at length here and elsewhere, but as BTO said, you ain't seen nothin' yet. I suspect many people with a libertarian or anti-authoritarian bent would appreciate the principle of individual sovereignty over their computing and data. The project is also something I've discussed a few times with others on here, so I know there's some curiosity about it.
The original developer of Urbit is also rather well known online, especially around here. Yarvin is a pretty controversial figure, but he departed the project in early 2019.

Meetup

There's a lot more that I haven't mentioned, but I hope this has piqued your interest. If you're in DFW, you can find details of the first meetup here. There will be free pizza and a presentation about Urbit, help installing & using it (Mac & Linux only for now), as well as the opportunity to socialize. All are welcome! Feel free to bring a friend.
If you're not in North Texas but are interested, there are also other regional meetups all over the world coming up soon.

Further reading:

submitted by p3on to TheMotte [link] [comments]

For devs and advanced users that are still in the dark: Read this to get redpilled about why Bitcoin (SV) is the real Bitcoin

This post by cryptorebel is a great intro for newbies. Here is a continuation for a technical audience. I'll be making edits for readability and maybe even add more content.
The short explanation of why BSV is the real Bitcoin is that it implements the original L1 scripting language, and removes hacks like p2sh. It also removes the block size limit, and yes that leads to a small number of huge nodes. It might not be the system you wanted. Nodes are miners.
The key thing to understand about the UTXO architecture is that it is maximally "sharded" by default. Logically dependent transactions may require linear span to construct, but they can be validated in sublinear span (actually polylogarithmic expected span). Constructing dependent transactions happens out-of-band in any case.
The fact that transactions in a block are merkelized is an obvious sign that Bitcoin was designed for big blocks. But merkle trees are only half the story. UTXOs are essentially hash-addressed stateful continuation snapshots which can also be "merged" (validated) in a tree.
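To make the Merkleization point concrete, here is a minimal sketch of how a block's transaction IDs get folded pairwise into a single root. It is illustrative only: the FakeHash helper is a stand-in for Bitcoin's double-SHA256, and the duplication of the last element on odd-length levels mirrors what Bitcoin's real tree construction does.

```C++
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Stand-in for Bitcoin's double-SHA256 over the concatenation of two child hashes.
static std::string FakeHash(const std::string &left, const std::string &right) {
    return std::to_string(std::hash<std::string>{}(left + right));
}

// Fold a list of txids pairwise until a single root remains.
// Bitcoin duplicates the last entry of an odd-length level.
static std::string MerkleRoot(std::vector<std::string> level) {
    if (level.empty()) return "";
    while (level.size() > 1) {
        if (level.size() % 2 != 0) level.push_back(level.back());
        std::vector<std::string> next;
        for (size_t i = 0; i < level.size(); i += 2) {
            next.push_back(FakeHash(level[i], level[i + 1]));
        }
        level = std::move(next);
    }
    return level[0];
}
```

Because every pair at a given level can be hashed independently, building and checking the tree parallelizes naturally, which is the property the big-block argument leans on.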
I won't even bother talking about how broken Lightning Network is. Of all the L2 scaling solutions that could have been used with small block sizes, it's almost unbelievable how many bad choices they've made. We should be kind to them and assume it was deliberate sabotage rather than insulting their intelligence.
Segwit is also outside the scope of this post.
However I will briefly hate on p2sh. Imagine seeing a stunted L1 script language, and deciding that the best way to implement multisigs was a soft-fork patch in the form of p2sh. If the intent was truly backwards-compatibility with old clients, then by that logic all segwit and p2sh addresses are supposed to only be protected by transient rules outside of the protocol. Explain that to your custody clients.
As far as Bitcoin Cash goes, I was in the camp of "there's still time to save BCH" until not too long ago. Unfortunately the galaxy brains behind BCH have doubled down on their mistakes. Again, it is kinder to assume deliberate sabotage. (As an aside, the fact that they didn't embrace the name "bcash" when it was used to attack them shows how unprepared they are when the real psyops start to hit. Or, again, that the saboteurs controlled the entire back-and-forth.)
The one useful thing that came out of BCH is some progress on L1 apps based on covenants, but the issue is that they are not taking care to ensure every change maintains the asymptotic validation complexity of bitcoin's UTXO.
Besides that, the BCH devs missed something big. So did I.
It's possible to load the entire transaction onto the stack without adding any new opcodes. Read this post for a quick intro on how transaction meta-evaluation leads to stateful smart contract capabilities. Note that it was written before I understood how it was possible in Bitcoin, but the concept is the same. I've switched to developing a language that abstracts this behavior and compiles to bitcoin's L1. (Please don't "told you so" at me if you just blindly trusted nChain but still can't explain how it's done.)
It is true that this does not allow exactly the same class of L1 applications as Ethereum. It only allows those that can be made parallel, those that can delegate synchronization to "userspace". It forces you to be scalable, to process bottlenecks out-of-band at a per-application level.
Now, some of the more diehard supporters might say that Satoshi knew this was possible and meant for it to be this way, but honestly I don't believe that. nChain says they discovered the technique 'several years ago'. OP_PUSH_TX would have been a very simple opcode to include, and it does not change any aspect of validation in any way. The entire transaction is already in the L1 evaluation context for the purpose of checksig, it truly changes nothing.
But here's the thing: it doesn't matter if this was a happy accident. What matters is that it works. It is far more important to keep the continuity of the original protocol spec than to keep making optimizations at the protocol level. In a concatenative language like bitcoin script, optimized clients can recognize "checksig trick phrases" regardless of their location in the script, and treat them like a simple opcode. Script size is not a constraint when you allow the protocol to scale as designed. Think of it as precompiles in EVM.
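As a rough, hypothetical sketch of what "recognize checksig trick phrases and treat them like a simple opcode" could mean for an optimized client: scan the script for a known byte pattern and dispatch to a fast path, much like an EVM precompile. The phrase bytes below are placeholders, not the actual trick script.

```C++
#include <algorithm>
#include <cstdint>
#include <vector>

using Script = std::vector<uint8_t>;

// Placeholder pattern standing in for the canonical encoding of a known
// "checksig trick" phrase; a real client would carry the full byte sequence.
static const Script kTrickPhrase = {0x76, 0xa9, 0xac};  // hypothetical bytes

// Count occurrences of the known phrase; wherever one is found, an optimized
// handler could run instead of interpreting the phrase opcode by opcode.
static size_t CountTrickPhrases(const Script &script) {
    size_t count = 0;
    auto it = script.begin();
    while ((it = std::search(it, script.end(),
                             kTrickPhrase.begin(), kTrickPhrase.end())) != script.end()) {
        ++count;  // dispatch to the fast path here
        it += kTrickPhrase.size();
    }
    return count;
}
```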
Now let's address Ethereum. V. Buterin recently wrote a great piece about the concept of credible neutrality. The only way for a blockchain system to achieve credible neutrality and long-term decentralization of power is to lock down the protocol rules. The thing that caused Ethereum to succeed was the yellow paper. Ethereum has outperformed every other smart contract platform because the EVM has clear semantics with many implementations, so people can invest time and resources into applications built on it. The EVM is apolitical, the EVM spec (fixed at any particular version) is truly decentralized. Team Ethereum can plausibly maintain credibility and neutrality as long as they make progress towards the "Serenity" vision they outlined years ago. Unfortunately they have already placed themselves in a precarious position by picking and choosing which catastrophes they intervene on at the protocol level.
But those are social and political issues. The major technical issue facing the EVM is that it is inherently sequential. It does not have the key property that transactions that occur "later" in the block can be validated before the transactions they depend on are validated. Sharding will hit a wall faster than you can say "O(n/64) is O(n)". Ethereum will get a lot of mileage out of L2, but the fundamental overhead of synchronization in L1 will never go away. The best case scaling scenario for ETH is an L2 system with sublinear validation properties like UTXO. If the economic activity on that L2 system grows larger than that of the L1 chain, the system loses key security properties. Ethereum is sequential by default with parallelism enabled by L2, while Bitcoin is parallel by default with synchronization forced into L2.
Finally, what about CSW? I expect soon we will see a lot of people shouting, "it doesn't matter who Satoshi is!", and they're right. The blockchain doesn't care if CSW is Satoshi or not. It really seems like many people's mental model is "Bitcoin (BSV) scales and has smart contracts if CSW==Satoshi". Sorry, but UTXO scales either way. The checksig trick works either way.
Coin Woke.
submitted by -mr-word- to bitcoincashSV [link] [comments]

You can call yourself a Bitcoiner if you know/can explain these terms...

03/Jan/2009
10 Minutes
10,000 BTC Pizza
2016 Blocks
21 Million
210,000 Blocks
51% Attack
Address
Altcoin
Antonopoulos
Asic
Asic Boost
Base58
Batching
Bech32
Bit
Bitcoin Cash
Bitcoin Improvement Proposal (BIP)
Bitcoin SV
Bitmain
Block
Block height
Block reward
Blockchain
Blockexplorer
Bloom Filter
Brain Wallet
Buidl
Change Address
Child pays for parent (CPFP)
Coinbase (not the exchange)
CoinJoin
Coinmarketcap (CMC)
Colored Coin
Confirmation
Consensus
Custodial Wallet
Craig Wright
David Kleinman
Difficulty
Difficulty adjustment
Difficulty Target
Dogecoin
Dorian Nakamoto
Double spend
Elliptic Curve Digital Signature Algorithm (ECDSA)
Ethereum
Faketoshi
Fork
Full Node
Gavin Andresen
Genesis Block
Getting goxed
Halving
Hard Fork
Hardware Wallet
Hash
Hashing
Hierarchical Deterministic (HD) Wallet
Hodl
Hot Wallet
Initial Coin Offering (ICO)
Initial Exchange Offering (IEO)
Ledger
Light Node
Lightning
Litecoin
Locktime
Mainnet
Malleability
Master Private Key
Master Public Key
Master Seed
mBTC
Mempool
Merkle Tree
Mining
Mining Farm
Mining Pool
Mixing
MtGox
Multisig
Nonce
Not your keys,...
Opcode
Orphan block
P2PKH
P2SH
Paper Wallet
Peers
Pieter Wuille
Premining
Private key
Proof of Stake (PoS)
Proof of Work (PoW)
Pruning
Public key
Pump'n'Dump
Replace by Fee (RBF)
Ripemd160
Roger Ver
sat
Satoshi Nakamoto
Schnorr Signatures
Script
Segregated Witness (Segwit)
Sha256
Shitcoin
Sidechain
Signature
Signing
Simplified Payment Verification (SPV)
Smart Contract
Soft Fork
Stratum
Syncing
Testnet
Transaction
Transaction Fees
TransactionId (Txid)
Trezor
User Activated Soft Fork (UASF)
Utxo
Wallet Import Format (WIF)
Watch-Only Address
Whitepaper
List obviously not complete. Suggestions appreciated.
Refs:
https://bitcoin.org/en/developer-glossary https://en.bitcoin.it/wiki/Main_Page https://www.youtube.com/channel/UCgo7FCCPuylVk4luP3JAgVw https://www.youtube.com/useaantonop
submitted by PolaT1x to Bitcoin [link] [comments]

Question: Atomic Multisig Funding

Question: Atomic Multisig Funding
I have been looking for a way to set up a 2of2 multisig UTXO with the caveat that the funding TX is invalid until both parties have actually committed whatever amount they agreed to. My understanding of the topic is somewhat basic so the following sketch is the best scheme I could come up with:
https://preview.redd.it/o4rdyikqqqx31.png?width=2369&format=png&auto=webp&s=71ef2e9014f3974e2eca196252ecce959bbc2fc3
I was wondering whether there is a better way to achieve the same functionality. Reading about Bitcoin and "programmable money" I expected to be able to implement such a scheme entirely within the locking script, and thus be able to use a standard Bitcoin wallet to fund the UTXO instead of having to write my own wallet or implement my own peer-to-peer communication network... Going through the list of opcodes, however, I find no ability to reference anything outside the script. I can push data to stacks, add, multiply, reverse and flip bits, etc. But afaict I have to define all data inside the script itself.
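For reference, a plain 2-of-2 locking script really is just fixed data: opcodes plus the two public keys baked in at funding time, which is exactly the limitation described above. A minimal sketch at the byte level (the keys are assumed to be 33-byte compressed public keys; 0x52 is OP_2, 0x21 pushes 33 bytes, 0xae is OP_CHECKMULTISIG):

```C++
#include <cstdint>
#include <vector>

// Build a bare 2-of-2 multisig locking script:
//   OP_2 <pubkeyA> <pubkeyB> OP_2 OP_CHECKMULTISIG
std::vector<uint8_t> Build2of2Script(const std::vector<uint8_t> &pubkeyA,
                                     const std::vector<uint8_t> &pubkeyB) {
    std::vector<uint8_t> script;
    script.push_back(0x52);                                  // OP_2
    script.push_back(0x21);                                  // push 33 bytes
    script.insert(script.end(), pubkeyA.begin(), pubkeyA.end());
    script.push_back(0x21);                                  // push 33 bytes
    script.insert(script.end(), pubkeyB.begin(), pubkeyB.end());
    script.push_back(0x52);                                  // OP_2
    script.push_back(0xae);                                  // OP_CHECKMULTISIG
    return script;
}
```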
When an unlocking script is executed, is it entirely unaware of its context, such as the block height its inputs were mined at, or whether the UTXOs it generates are duplicates?
submitted by xep426 to Bitcoin [link] [comments]

The BCH blockchain is 165GB! How well can we compress it? I had a closer look

Someone posted their results for compressing the blockchain in the Telegram group; this is what they were able to do:
Note, bitcoin by its nature is poorly compressible, as it contains a lot of incompressible data, such as public keys, addresses, and signatures. However, there's also a lot of redundant information in there, e.g. the transaction version, and it's usually the same opcodes, locktime, sequence number etc. over and over again.
I was curious and thought: how much could we actually compress the blockchain? This is actually very relevant: as I established in my previous post about the costs of a 1GB full node, storage and bandwidth costs seem to be the biggest bottlenecks, while CPU computation is actually the cheapest part, as we were almost able to get away with ten-year-old CPUs.
Let's have a quick look at the transaction format and see what we can do. I'll have a TL;DR at the end if you don't care about how I came up with those numbers.
Before we jump in, don't forget that I'll be streaming again today, building an SPV node, as I've already posted about here. Last time we made some big progress, I think! Check it out here https://dlive.tv/TobiOnTheRoad. It'll start at around 15:00 UTC!

Version (32 bits)

There are currently two transaction versions. Unless we add new ones, we can compress this field to 1 bit (0 = version 1; 1 = version 2).

Input/output count (8 to 72 bits)

This is the number of inputs the transaction has (see section 9 of the whitepaper). If the number of inputs is below 253, it takes 1 byte, and otherwise 3 to 9 bytes (a one-byte prefix followed by 2, 4, or 8 bytes). This nice chart shows that, currently, 90% of Bitcoin transactions only have 2 inputs, sometimes 3.
A byte can represent 256 different numbers. Having this as the lowest granularity for input count seems quite wasteful! Also, 0 inputs is never allowed in Bitcoin Cash. If we represent one input with 00₂, two inputs with 01₂, three inputs with 10₂ and everything else with 11₂ + current format, we get away with only 2 bits more than 90% of the time.
Outputs are slightly higher, 3 or less 90% of the time, but the same encoding works fine.
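A minimal sketch of the 2-bit count encoding described above, assuming a simple MSB-first bit writer. Counts 1 to 3 fit in two bits; anything larger pays the 11₂ escape prefix followed by the original varint bytes:

```C++
#include <cstdint>
#include <vector>

// Very small MSB-first bit writer, used only for this illustration.
struct BitWriter {
    std::vector<uint8_t> bytes;
    int bitpos = 0;
    void WriteBit(bool b) {
        if (bitpos % 8 == 0) bytes.push_back(0);
        if (b) bytes.back() |= 0x80 >> (bitpos % 8);
        ++bitpos;
    }
    void WriteBits(uint32_t value, int nbits) {  // MSB first
        for (int i = nbits - 1; i >= 0; --i) WriteBit((value >> i) & 1);
    }
};

// Encode an input/output count with the scheme from the post:
// 1 -> 00, 2 -> 01, 3 -> 10, otherwise 11 followed by the usual varint bytes.
void EncodeCount(BitWriter &w, uint64_t count, const std::vector<uint8_t> &varint) {
    if (count >= 1 && count <= 3) {
        w.WriteBits(static_cast<uint32_t>(count - 1), 2);
    } else {
        w.WriteBits(0b11, 2);
        for (uint8_t byte : varint) w.WriteBits(byte, 8);  // original encoding, bit-packed
    }
}
```

With this, the typical transaction spends 2 bits each on its input and output counts instead of a full byte each.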

Input (>320 bits)

There can be multiple of those. It has the following format:

Output (≥72 bits)

There can be multiple of those. They have the following format:

Lock time (32 bits)

This is set to a fixed default value most of the time; only occasionally are transactions time-locked, and the field only changes the transaction's meaning if the sequence number of some input is not FF FF FF FF. We can use the same trick as with the sequence number, so that most of the time this field takes just 1 bit.
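The "default flag" trick sketched out, for concreteness: one bit says the field holds its usual default, and only the exceptional case pays the full 32 bits. The same routine works for each input's sequence number.

```C++
#include <cstdint>
#include <vector>

// Append the lock time using one flag bit: 0 = "default value, nothing follows",
// 1 = "explicit value, the full 32 bits follow". Bits are collected in a plain
// bool vector just to keep the illustration short.
void EncodeLockTime(std::vector<bool> &bits, uint32_t locktime, uint32_t default_value) {
    if (locktime == default_value) {
        bits.push_back(false);                 // 1 bit total
    } else {
        bits.push_back(true);                  // 33 bits total
        for (int i = 31; i >= 0; --i)
            bits.push_back(((locktime >> i) & 1) != 0);
    }
}
```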

Total

So, in summary, we have:
Nice table:
| No. of inputs | No. of outputs | Uncompressed size | Compressed size | Ratio |
|---|---|---|---|---|
| 1 | 1 | 191 bytes (1528 bits) | 128 bytes (1023 bits) | 67.0% |
| 1 | 2 | 226 bytes (1808 bits) | 151 bytes (1202 bits) | 66.5% |
| 2 | 1 | 339 bytes (2712 bits) | 233 bytes (1861 bits) | 68.6% |
| 2 | 2 | 374 bytes (2992 bits) | 255 bytes (2040 bits) | 68.2% |
| 2 | 3 | 408 bytes (3264 bits) | 278 bytes (2219 bits) | 68.0% |
| 3 | 2 | 520 bytes (4160 bits) | 360 bytes (2878 bits) | 69.2% |
| 3 | 3 | 553 bytes (4424 bits) | 383 bytes (3057 bits) | 69.1% |
Interestingly, at a compression ratio of around 69%, compressing the 165 GB blockchain would give us roughly 113.8 GB. Which is (almost) exactly the amount that 7zip was able to give us with ultra compression!
I think there's not a lot we can do to compress the transaction further; even if we only transmit public keys, signatures and addresses, we'd at minimum have 930 bits, which would still only be a 61% compression ratio (and that's missing the outpoint and value). 7zip is probably also able to exploit the re-use of addresses/public keys if someone sends to/from the same address multiple times, which we haven't explored here; but it's generally discouraged to send to the same address multiple times anyway, so I didn't explore that. We'd still have signatures clocking in at 512 bits.
Note that the compression scheme I outlined here operates on a per transaction or per block basis (if we compress transacted satoshis per block), unlike 7zip, which compresses per blockchain.
I hope this was an interesting read. I expected the compression ratio to be higher, but still, if it takes 3 weeks to sync uncompressed, it'll take just 2 weeks compressed. Which can mean a lot for a business, actually.

I'll be streaming again today!

As I've already posted about here, I will stream about building an SPV node in Python again. It'll start at 15:00 UTC. Last time we made some big progress, I think! We were able to connect to my Bitcoin ABC node and send/receive our first version message. I'll do a nice recap of what we've done in that time, as there haven't been many present last time. And then we'll receive our first headers and then transactions! Check it out here: https://dlive.tv/TobiOnTheRoad.
submitted by eyeofpython to btc [link] [comments]

WARNING: Bitcoin Cash May Introduce Fatal Errors

Hi All,
I am a long-term Bitcoin enthusiast and a core developer of PascalCoin, an infinitely scalable and completely original cryptocurrency (https://www.pascalcoin.org). I am also the developer of BlockchainSQL.io, an SQL backend for Bitcoin.
I have been involved in Bitcoin community for a long time, and was a big supporter of hard-forking on Aug 1 2017 (https://redd.it/6i5qt1).
Due to the recent alarming proposals and the method by which they are being pushed, I feel I have a moral duty to speak out to warn against what could be fatal technical errors for BCH.
As a full-time core developer at PascalCoin for the last 18 months, I have dealt with DoS attacks, 51% attacks, timewarp attacks, mining centralisation attacks, out-of-consensus bugs, high orphan rates and various other issues. Suffice it to say, Layer-1 cryptocurrency development is hard, and you don't really appreciate how fragile everything is until you work on a cryptocurrency codebase and manage a live mainnet (disclaimer: Albert Molina is the main genius here, but it is a team effort).
Infinite Block Size: I know there has been much discussion here about the safety of "big blocks", and I generally agree with those arguments. However, the analysis I've seen always assumes the attackers are economically rational actors. On that basis, yes, the laws of economics will incentivise miners to naturally regulate the size of minted blocks. However, this does not include "economically irrational actors" such as competing coins, governments, banks, etc.
Allowing the natural limit of 32mb was, I think, a sensible move, but adding changes to the network protocol to allow 128mb blocks and then more does not seem appropriate right now, since:
It makes much more sense to leave the blocksize at 32mb until blocks reach ~16mb at which point the technical, security and reliability issues can be better understood and a more informed decision can be made by the BCH community.
Re-Enabling Opcodes: It's important to remember that these opcodes were disabled by Satoshi Nakamoto himself early on in the project due to ongoing bugs and instability arising out of the scripting engine (https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures).
Later as the scripts became standardized, this issue was forgotten/abandoned since it would require a hard-fork to reactivate them and Core developers were against HF's. Personally, I think it's a good idea to re-enable them, but only after:
Infinite Script Size: One of the proposals I've seen that complements re-enabling opcodes is to enable unbounded script sizes. From local discussions I've had with people promoting this idea, the "belief" is that miners will auto-regulate these as well. However, this is unproven.
Unbounded script sizes introduce significant attack vectors in the areas of denial of service and stack/memory overflow (especially with all opcodes enabled). One attack I can foresee here is the introduction of a quadratic-hashing attack, but inside a single transaction!
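As a rough, hypothetical illustration of why this scales so badly: if a script of size S contains k expensive operations that each process data proportional to S, the work is on the order of k × S. A 1 MB script in which 10,000 operations each hash roughly 1 MB of data implies on the order of 10 GB of hashing for a single transaction, and doubling the script size (with proportionally more such operations) roughly quadruples the work, hence "quadratic".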
You have to understand that Ethereum had this problem from the onset and this is why they introduced the concept of "GAS". CPU power is a limited resource and if you don't pay for it, it will be completely abused. From what I've seen, there is no equivalent to GAS inside this proposal.
To understand the seriousness of this issue, think back to Ethereum's network instability before the DAO hack. It went through many periods of DoS attacks as hackers cleverly found oversights in their opcode/EVM engine. This is a serious, proven and real-world attack vector and not one to be "solved later". The BCH network could be brought to a grinding halt, and easily, with unbounded script sizes that do not pay any gas.
Voting/Signaling/Testnet: Even at PascalCoin, we go through a process of voting to enable all changes (https://www.pascalcoin.org/voting). We are barely a 10mill mcap coin and yet show more discipline with Voting, well-defined PIP design guidelines and Testnet releases. There is no excuse for BCH! It is a multi-billion dollar network and changes of this magnitude cannot be released so recklessly in such short time-frames.
I hope these comments are considered by stakeholders of BCH and the community at large. I am not a maximalist and support BCH, but the last week has revealed there is a serious technical void in BCH! The Bitcoin Core devs may not know much about economics, but they did know some things about security & reliability of cryptocurrency software.
PLEASE REMEMBER THERE ARE EXTREMELY TALENTED AND VICIOUS ATTACKERS OUT THERE and you need to be very careful with changes of this magnitude.
submitted by HermanSchoenfeld to btc [link] [comments]

Why CHECKDATASIG Does Not Matter

Why CHECKDATASIG Does Not Matter

In this post, I will prove that the two main arguments against the new CHECKDATASIG (CDS) op-codes are invalid. And I will prove that two common arguments for CDS are invalid as well. The proof requires only one assumption (which I believe will be true if we continue to reactivate old op-codes and increase the limits on script and transaction sizes [something that seems to have universal support]):
ASSUMPTION 1. It is possible to emulate CDS with a big long raw script.

Why are the arguments against CDS invalid?

Easy. Let's analyse the two arguments I hear most often against CDS:

ARG #1. CDS can be used for illegal gambling.

This is not a valid reason to oppose CDS because it is a red herring. By Assumption 1, the functionality of CDS can be emulated with a big long raw script. CDS would not then affect what is or is not possible in terms of illegal gambling.

ARG #2. CDS is a subsidy that changes the economic incentives of bitcoin.

The reasoning here is that being able to accomplish in a single op-code, what instead would require a big long raw script, makes transactions that use the new op-code unfairly cheap. We can shoot this argument down from three directions:
(A) Miners can charge any fee they want.
It is true that today miners typically charge transaction fees based on the number of bytes required to express the transaction, and it is also true that a transaction with CDS could be expressed with fewer bytes than the same transaction constructed with a big long raw script. But these two facts don't matter because every miner is free to charge any fee he wants for including a transaction in his block. If a miner wants to charge more for transactions with CDS he can (e.g., maybe the miner believes such transactions cost him more CPU cycles and so he wants to be compensated with higher fees). Similarly, if a miner wants to discount the big long raw scripts used to emulate CDS he could do that too (e.g., maybe a group of miners have built efficient ways to propagate and process these huge scripts and now want to give a discount to encourage their use). The important point is that the existence of CDS does not impede the free market's ability to set efficient prices for transactions in any way.
(B) Larger raw transactions do not imply increased orphaning risk.
Some people might argue that my discussion above was flawed because it didn't account for orphaning risk due to the larger transaction size when using a big long raw script compared to a single op-code. But transaction size is not what drives orphaning risk. What drives orphaning risk is the amount of information (entropy) that must be communicated to reconcile the list of transactions in the next block. If the raw-script version of CDS were popular enough to matter, then transactions containing it could be compressed as
....CDS'(signature, message, public-key)....
where CDS' is a code* that means "reconstruct this big long script operation that implements CDS." Thus there is little if any fundamental difference in terms of orphaning risk (or bandwidth) between using a big long script or a single discrete op code.
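A toy sketch of that "reconstruct this big long script" idea: block relay carries a short code and every node expands it deterministically to the agreed long script before validation. The code value and the expansion table are placeholders, not a real proposal.

```C++
#include <cstdint>
#include <map>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Hypothetical table mapping short codes to the long raw scripts they stand for.
static const std::map<uint8_t, Bytes> kExpansions = {
    {0xF0, Bytes{/* ...the big long raw script emulating CDS... */}},
};

// Expand any occurrence of a known short code into its long form.
// (A real compression scheme would be careful about where codes may appear.)
Bytes ExpandScript(const Bytes &compressed) {
    Bytes out;
    for (uint8_t b : compressed) {
        auto it = kExpansions.find(b);
        if (it != kExpansions.end()) {
            out.insert(out.end(), it->second.begin(), it->second.end());
        } else {
            out.push_back(b);
        }
    }
    return out;
}
```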
(C) More op-codes does not imply more CPU cycles.
Firstly, all op-codes are not equal. OP_1ADD (adding 1 to the input) requires vastly fewer CPU cycles than OP_CHECKSIG (checking an ECDSA signature). Secondly, if CDS were popular enough to matter, then whatever "optimized" version that could be created for the discrete CDS op-codes could be used for the big long version emulating it in raw script. If this is not obvious, realize that all that matters is that the output of both functions (the discrete op-code and the big long script version) must be identical for all inputs, which means that it does NOT matter how the computations are done internally by the miner.

Why are (some of) the arguments for CDS invalid?

Let's go through two of the arguments:

ARG #3. It makes new useful bitcoin transactions possible (e.g., forfeit transactions).

If Assumption 1 holds, then this is false because CDS can be emulated with a big long raw script. Nothing that isn't possible becomes possible.

ARG #4. It is more efficient to do things with a single op-code than a big long script.

This is basically Argument #2 in reverse. Argument #2 was that CDS would be too efficient and change the incentives of bitcoin. I then showed how, at least at the fundamental level, there is little difference in efficiency in terms of orphaning risk, bandwidth or CPU cycles. For the same reason that Argument #2 is invalid, Argument #4 is invalid as well. (That said, I think a weaker argument could be made that a good scripting language allows one to do the things he wants to do in the simplest and most intuitive ways, and so if CDS is indeed useful then I think it makes sense to implement it in compact form, but IMO this is really more of an aesthetics thing than something fundamental.)
It's interesting that both sides make the same main points, yet argue in the opposite directions.
Argument #1 and #3 can both be simplified to "CDS permits new functionality." This is transformed into an argument against CDS by extending it with "...and something bad becomes possible that wasn't possible before and so we shouldn't do it." Conversely, it is transformed to an argument for CDS by extending it with "...and something good becomes possible that was not possible before and so we should do it." But if Assumption 1 holds, then "CDS permits new functionality" is false and both arguments are invalid.
Similarly, Arguments #2 and #4 can both be simplified to "CDS is more efficient than using a big long raw script to do the same thing." This is transformed into an argument against CDS by tacking on the speculation that "...which is a subsidy for certain transactions which will throw off the delicate balance of incentives in bitcoin!!1!." It is transformed into an argument for CDS because "... heck, who doesn't want to make bitcoin more efficient!"

What do I think?

If I were the emperor of bitcoin I would probably include CDS because people are already excited to use it, the work is already done to implement it, and the plan to roll it out appears to have strong community support. The work to emulate CDS with a big long raw script is not done.
Moving forward, I think Andrew Stone's (thezerg1) approach outlined here is an excellent way to make incremental improvements to Bitcoin's scripting language. In fact, after writing this essay, I think I've sort of just expressed Andrew's idea in a different form.
* you might call it an "op code" teehee
submitted by Peter__R to btc [link] [comments]

A Response to Roger Ver

This post was inspired by the video “Roger Ver’s Thoughts on Craig Wright”. Oh, wait. Sorry. “Roger Ver’s Thoughts on 15th November Bitcoin Cash Upgrade”. Not sure how I mixed those two up.
To get it out of the way first and foremost: I have nothing but utmost respect for Roger Ver. You have done more than just about anyone to bring Bitcoin to the world, and for that you will always have my eternal gratitude. While there are trolls on both sides, the crucifixion of Bitcoin Jesus in the past week has been disheartening to see. As a miner, I respect his decision to choose the roadmap that he desires.
It is understandable that the Bitcoin (BCH) upgrade is causing a clash of personalities. However, what has been particularly frustrating is the lack of debate around the technical merits of Bitcoin ABC vs Bitcoin SV. The entire conversation has now revolved around Craig Wright the individual instead of what is best for Bitcoin Cash moving forward.
Roger’s video did confirm something about difference of opinions between the Bitcoin ABC and Bitcoin SV camps. When Roger wasn’t talking about Craig Wright, he spent a portion of his video discussing how individuals should be free to trade drugs without the intervention of the state. He used this position to silently attack Craig Wright for allegedly wanting to control the free trade of individuals. This appears to confirm what Craig Wright has been saying: that DATASIGVERIFY can be used to enable widely illegal use-cases of transactions, and Roger’s support for the ABC roadmap stems from his personal belief that Bitcoin should enable all trade regardless of legal status across the globe.
Speaking for myself, I think the drug war is immoral. I think human beings should be allowed to put anything they want in their own bodies as long as they are not harming others. I live in the United States and have personally seen the negative consequences of the drug war. This is a problem. The debasement of our currency and theft at the hands of central banks is a separate problem. Bitcoin was explicitly created to solve one of these problems.
Roger says in his video that “cryptocurrencies” were created to enable trade free from government oversight. However, Satoshi Nakamoto never once said this about Bitcoin. Satoshi Nakamoto was explicitly clear, however, that Bitcoin provided a solution to the debasement of currency.
“The root problem with conventional currency is all the trust that's required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust.” – Satoshi Nakamoto 02/11/2009
As we’ve written previously, the genesis block is often cited as a criticism of the 2008 bailout. However, the content of the article outlines that the bailout had already occurred. The article reveals that the government was poised to go a step further by buying up the toxic bank assets as part of a nationalization effort! In this scenario, according to the Times, "a 'bad bank' would be created to dispose of bad debts. The Treasury would take bad loans off the hands of troubled banks, perhaps swapping them for government bonds. The toxic assets, blamed for poisoning the financial system, would be parked in a state vehicle or 'bad bank' that would manage them and attempt to dispose of them while 'detoxifying' the main-stream banking system." The article outlines a much more nightmarish scenario than bank bailouts, one that would effectively remove any element of private enterprise from banking and use the State to seize the bank's assets.
The United States is progressively getting to a point where cannabis can be freely traded and used without legal repercussion. As a citizen, each election has given me the opportunity to bring us closer to enacting that policy at a national level. However, I have never had the ability to have a direct impact on preventing the debasement of the United States dollar. The dollar is manipulated by a “private” organization that is accountable to no one, and on a yearly basis we are given arbitrary interest rates that I have no control over. The government uses its arbitrary control over the money supply to enable itself to spend trillions of dollars it doesn’t have on foreign wars. Roger Ver has passionately argued against this in multiple videos available on the internet.
This is what Bitcoin promised to me when I first learned about it. This is what makes it important to me.
When the Silk Road was shut down, Bitcoin was unaffected. Bitcoin, like the US dollar, was just a tool that was used for transactions. There is an inherent danger that governments, whether you like it or not, would use every tool at their disposal to shut down any system that enabled at a protocol level illegal trade. They, rightfully or wrongfully, did this with the Silk Road. Roger’s video seems to hint that he thinks Bitcoin Cash should be an experiment in playing chicken with governments across the world about our right to trade freely without State intervention. The problem is that this is a vast underestimation of just how quickly Bitcoin (BCH) could be shut down if the protocol itself was the tool being used for illegal trade instead of being the money exchanged on top of illegal trade platforms.
I don’t necessarily agree or disagree with Roger’s philosophy on what “cryptocurrencies” should be. However, I know what Bitcoin is. Bitcoin is simply hard, sound money. That is boring to a lot of those in the “cryptocurrency” space, but it is the essential tool that enables freedom for the globe. It allows those in Zimbabwe to have sound currency free from the 50 billion dollar bills handed out like candy by the government. It allows those of us in the US to be free from the arbitrary manipulation of the Fed. Hard, sound, unchanging money that can be used as peer to peer digital cash IS the killer use case of Bitcoin. That is why we are here building on top of Bitcoin Cash daily.
When Roger and ABC want to play ball with governments across the globe and turn Bitcoin into something that puts it in legal jeopardy, it threatens the value of my bitcoins. Similar to the uncertainty we go through in the US every year as we await the arbitrary interest rates handed out by the Fed, we are now going to wait in limbo to see if governments will hold Bitcoin Cash miners responsible for enabling illegal trade at a protocol level. This is an insanely dangerous prospect to introduce to Bitcoin (BCH) so early in its lifespan. In one of Satoshi Nakamoto’s last public posts, he made it clear just how important it was to not kick the hornet’s nest that is government:
“It would have been nice to get this attention in any other context. WikiLeaks has kicked the hornet's nest, and the swarm is headed towards us.” – Satoshi Nakamoto 12/11/2010
Why anyone would want to put our opportunity of sound monetary policy in jeopardy to enable illegal trading at a base protocol level is beyond me. I respect anyone who has an anarcho-capitalist ideology. But, please don’t debase my currency by putting it at risk of legal intervention because you want to impose that ideology on the world.
We took the time to set up a Q&A with the Bitcoin SV developers Steve Shadders and Daniel Connolly. We posted on Reddit and gathered a ton of questions from the “community”. We received insanely intelligent, measured, and sane responses to all of the “attack vectors” proposed against increasing the block size and re-enabling old opcodes. Jonathon Toomim spent what must have been an hour or so asking 15+ questions in the Reddit thread of which we obtained answers to most. We have yet to see him respond to the technical answers given by the SV team. In Roger’s entire video today about the upcoming November fork, he didn’t once mention one reason why he disagrees with the SV roadmap. Instead, he has decided to go on Reddit and use the same tactics that were used by Core against Bitcoin Unlimited back in the day by framing the upcoming fork as “BCH vs BSV”, weeks before miners have had the ability to actually vote.
What Bitcoin SV wants to accomplish is enable sound money for the globe. This is boring. This is not glamorous. It is, however, the greatest tool of freedom we can give the globe. We cannot let ideology or personalities change that goal. Ultimately, it won’t. We have been continual advocates for miners, the ones who spend 1000x more investing in the network than the /btc trolls, to decide the future of BCH. We look forward to seeing what they choose on Nov 15th.
Roger mentions that it is our right to fork off and create our own chains. While that is okay to have as an opinion, Satoshi Nakamoto was explicit that we should be building one global chain. We adhere to the idea that miners should vote with their hashpower and determine the emergent chain after November 15th.
“It is strictly necessary that the longest chain is always considered the valid one. Nodes that were present may remember that one branch was there first and got replaced by another, but there would be no way for them to convince those who were not present of this. We can't have subfactions of nodes that cling to one branch that they think was first, others that saw another branch first, and others that joined later and never saw what happened. The CPU proof-of-work vote must have the final say. The only way for everyone to stay on the same page is to believe that the longest chain is always the valid one, no matter what.” – Satoshi Nakamoto 11/09/2008
Edit: A clarification. I used the phrase "Bitcoin is boring". I want to clarify that Bitcoin itself is capable of far more than we've ever thought possible, and this statement is one I will be elaborating on further in the future.
submitted by The_BCH_Boys to btc [link] [comments]

Possible to do loops in Forth-like Bitcoin Script language with two stacks?

Bitcoin Script is a Forth-like stack-based language. It uses a main stack as well as an alt stack. There is no built-in loop.

AFAIK, a Forth interpreter implements "DO...LOOP" by labelling DO and conditionally jumping back to the label at LOOP. Bitcoin Script does not support labels or conditional jumps, at least not directly. Is there a way to use its opcodes to implement loops with the help of the alt stack? If impossible, why?
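Not an authoritative answer, but the usual workaround: since Bitcoin Script has no backward jumps, a bounded loop is typically unrolled when the script is constructed, i.e. the body is repeated up to the maximum iteration count, often with each copy wrapped in OP_IF ... OP_ENDIF so fewer passes can actually execute. A minimal sketch of that unrolling step (the body is treated as opaque bytes):

```C++
#include <cstdint>
#include <vector>

using Script = std::vector<uint8_t>;

// Emulate "repeat body up to max_iterations times" by concatenating the body.
// Each copy would normally be guarded by OP_IF ... OP_ENDIF in the real script.
Script UnrollLoop(const Script &body, int max_iterations) {
    Script unrolled;
    for (int i = 0; i < max_iterations; ++i) {
        unrolled.insert(unrolled.end(), body.begin(), body.end());
    }
    return unrolled;
}
```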
submitted by sinoTrinity to Forth [link] [comments]

TIL in 2011 a user running a modified mining client intentionally underpaid himself 1 satoshi, which is the only time bitcoins have ever truly been destroyed.

In block 124724 you'll find txid 5d80a29b which has a payout of 49.99999999 BTC at a time when the block reward was 50 BTC. A transaction fee of 0.01 BTC was also forfeited. This bitcoin no longer exists anywhere in the network, as opposed to "burned" coins which technically still exist in a wallet which no one can ever access (ex: 1BitcoinEaterAddressDontSendf59kuE).
On bitcointalk user midnightmagic explains a deeper meaning behind this:
I did it as a tribute to our missing Satoshi: we are missing Satoshi, and now the blockchain is missing 1 Satoshi too, for all time.
EDIT: Users have pointed out in the comments that this isn't actually the only time coins have been destroyed, there are actually several different ways coins have been destroyed in the past. sumBTC also points out that the satoshi wasn't destroyed-- it was never created in the first place.
Another interesting way to destroy coins is by creating a duplicate transaction. This is again done with a modified client. For example see block 91722 and block 91880. They both contain txid e3bf3d07. The newer transaction essentially overwrites the old transaction, destroying the previous one and the associated coins. This issue was briefly discussed on Github #612 and determined to not be a big deal. In 2012 they realized that duplicated transactions could be used as part of an attack on the network so this was fixed and is no longer possible.
Provably burning coins was actually added as a feature in 2014 via OP_RETURN. Transactions marked with this opcode MAY be pruned from a client's unspent transaction database. Whether or not these coins still exist is a matter of opinion.
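For reference, an OP_RETURN output script is just the OP_RETURN byte (0x6a) followed by a data push, which is why such outputs are provably unspendable and safe to prune. A minimal sketch, assuming the payload fits in a single direct push of 1 to 75 bytes:

```C++
#include <cstdint>
#include <stdexcept>
#include <vector>

// Build a provably unspendable data-carrier output script:
//   OP_RETURN <push of data>
// 0x6a is OP_RETURN; a length byte of 1..75 is a direct push of that many bytes.
std::vector<uint8_t> BuildOpReturnScript(const std::vector<uint8_t> &data) {
    if (data.empty() || data.size() > 75) {
        throw std::invalid_argument("sketch only handles direct pushes of 1-75 bytes");
    }
    std::vector<uint8_t> script;
    script.push_back(0x6a);                               // OP_RETURN
    script.push_back(static_cast<uint8_t>(data.size()));  // direct push length
    script.insert(script.end(), data.begin(), data.end());
    return script;
}
```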
Finally, at least 1,000 blocks forfeited transaction fees due to a software bug. Forfeited transaction fees are lost forever and are unaccounted for in any wallet.
Further reading: https://bitcoin.stackexchange.com/questions/30862/how-much-bitcoin-is-lost-on-average/30864#30864 https://bitcoin.stackexchange.com/questions/38994/will-there-be-21-million-bitcoins-eventually/38998#38998
submitted by NewLlama to Bitcoin [link] [comments]

Update UNO wallets to 0.11.0

UNOBTANIUM WALLET UPDATE. It's time to update your UNO QT wallet to 0.11.0 for BIP65 support and misc fixes.
Get a compiled wallet at https://unobtanium.uno or roll your own from https://github.com/unobtanium-official/
This is an important update that enables BIP65 consensus, which in turn enables CHECKLOCKTIMEVERIFY.
BIP65 has been active in Bitcoin for nearly 5 years. The UNO network wants to enable this to enhance exchange safety and enable p2p 'atomic swap' apps. We need a consensus on the UNO network, which we have, but we want EVERYONE using the 0.11.0 wallet for the best network experience.
More info from the Bitcoin implementation of BIP65:
" In late 2015, the BIP65 soft fork[3] redefined the NOP2 opcode as the CheckLockTimeVerify (CLTV) opcode, allowing transaction outputs (rather than whole transactions) to be encumbered by a timelock. When the CLTV opcode is called, it will cause the script to fail unless the nLockTime on the transaction is equal to or greater than the time parameter provided to the CLTV opcode. Since a transaction may only be included in a valid block if its nLockTime is in the past, this ensures the CLTV-based timelock has expired before the transaction may be included in a valid block. "
UPGRADE TO 0.11.0 TODAY!
submitted by FallingKnife_ to Unobtanium [link] [comments]

Some thoughts about OP_LSHIFT/OP_RSHIFT

For whatever use it may serve, here's something about OP_LSHIFT & OP_RSHIFT.
As u/cryptocached mentioned, some OP-codes returned (or were altered at Satoshi's will) to the BSV source code without any use-case as far as I know, but please comment if you have any, since the questions in 9ce492 were never answered.
Anyway, I did take a look at the old source (v0.1) to understand what it does and why it was removed. I couldn't find any detailed bug report relating to crashes in v0.1 to get more information about this specific case, but here's why I think it could crash in the old version. I might be wrong, so please correct me.
v0.1: script.cpp, line 595-605
```C++
case OP_LSHIFT:
    if (bn2 < bnZero)
        return false;
    bn = bn1 << bn2.getulong();
    break;

case OP_RSHIFT:
    if (bn2 < bnZero)
        return false;
    bn = bn1 >> bn2.getulong();
    break;
```
My simple guess would be it crashed on a bit-shift with some large value, although I haven't verified it. (ULONG_MAX = 4294967295)
The CBigNum is a signed integer (positive and negative values); bn1 and bn2 both come from the stack as CBigNum. The shift-value bn2 must be a positive value (unsigned int), and is validated with bn2 < bnZero.
Since CBigNum is a signed integer, a bit-shift would always preserve the sign-bit(!), indicating a positive or negative number, which is also defined here: https://en.bitcoin.it/wiki/Script
I think both shift OP-codes can be combined by implementing them in a slightly different way with only a single OP_SHIFT. Here's an idea for a shorter version that uses the sign-bit (negative = left-shift):
```C++
// - Consider it pseudo-code; type-casting may be incorrect
// - Unsafe and probably contains the same crash as in v0.1
case OP_SHIFT:
    if (bn2 < bnZero) {
        // the `* -1` inverts the sign-bit, making the value positive
        // maybe CBigNum.abs(bn2) would be better?
        bn = bn1 << (unsigned int)(bn2.getint() * -1);
    } else {
        bn = bn1 >> (unsigned int)bn2.getint();
    }
```
Although bit-shifting for CBigNum should be consistent, my understanding from the following link is that bit-shifting can behave differently in various compilers and thus give unexpected/unwanted results.
There’s nothing inherently bad about running with a ball in your hands and also there’s nothing inherently bad about shifting a 32-bit number by 33 bit positions. But one is against the rules of basketball and the other is against the rules of C and C++. In both cases, the people designing the game have created arbitrary rules and we either have to play by them or else find a game we like better.
source: blog.regehr.org/archives/213
https://stackoverflow.com/q/980565
https://stackoverflow.com/q/18790923
In my opinion, bit-shifting by a variable value is a big warning(!) and this snippet of code has been obsolete for many years now.

The next part is the new BSV implementation.
First the good part, actually a very good part: if you implement OP_LSHIFT/OP_RSHIFT with your own implementation, this function should make the result consistent, which is a must!
In the new implementation the maximum bit-shift is 7 (n % 8) and it is done for each separate byte in the sequence, which circumvents compiler-specific implementations. It would also limit the maximum shifted bits to the length of the byte sequence x.
As for readability, be your own judge.
BSV interpreter.cpp
```C++
typedef std::vector<uint8_t> valtype;

[...]

inline uint8_t make_rshift_mask(size_t n) {
    static uint8_t mask[] = {0xFF, 0xFE, 0xFC, 0xF8, 0xF0, 0xE0, 0xC0, 0x80};
    return mask[n];
}

inline uint8_t make_lshift_mask(size_t n) {
    static uint8_t mask[] = {0xFF, 0x7F, 0x3F, 0x1F, 0x0F, 0x07, 0x03, 0x01};
    return mask[n];
}

// shift x right by n bits, implements OP_RSHIFT
static valtype RShift(const valtype &x, int n) {
    int bit_shift = n % 8;
    int byte_shift = n / 8;

    uint8_t mask = make_rshift_mask(bit_shift);
    uint8_t overflow_mask = ~mask;
    valtype result(x.size(), 0x00);
    for (int i = 0; i < (int)x.size(); i++) {
        int k = i + byte_shift;
        if (k < (int)x.size()) {
            uint8_t val = (x[i] & mask);
            val >>= bit_shift;
            result[k] |= val;
        }
        if (k + 1 < (int)x.size()) {
            uint8_t carryval = (x[i] & overflow_mask);
            carryval <<= 8 - bit_shift;
            result[k + 1] |= carryval;
        }
    }
    return result;
}

// shift x left by n bits, implements OP_LSHIFT
static valtype LShift(const valtype &x, int n) {
    int bit_shift = n % 8;
    int byte_shift = n / 8;

    uint8_t mask = make_lshift_mask(bit_shift);
    uint8_t overflow_mask = ~mask;
    valtype result(x.size(), 0x00);
    for (int i = x.size() - 1; i >= 0; i--) {
        int k = i - byte_shift;
        if (k >= 0) {
            uint8_t val = (x[i] & mask);
            val <<= bit_shift;
            result[k] |= val;
        }
        if (k - 1 >= 0) {
            uint8_t carryval = (x[i] & overflow_mask);
            carryval >>= 8 - bit_shift;
            result[k - 1] |= carryval;
        }
    }
    return result;
}

[...]

case OP_LSHIFT: {
    // (x n -- out)
    if (stack.size() < 2) {
        return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
    }

    const valtype vch1 = stacktop(-2);
    CScriptNum n(stacktop(-1), fRequireMinimal);
    if (n < 0) {
        return set_error(serror, SCRIPT_ERR_INVALID_NUMBER_RANGE);
    }

    popstack(stack);
    popstack(stack);
    stack.push_back(LShift(vch1, n.getint()));
} break;

case OP_RSHIFT: {
    // (x n -- out)
    if (stack.size() < 2) {
        return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
    }

    const valtype vch1 = stacktop(-2);
    CScriptNum n(stacktop(-1), fRequireMinimal);
    if (n < 0) {
        return set_error(serror, SCRIPT_ERR_INVALID_NUMBER_RANGE);
    }

    popstack(stack);
    popstack(stack);
    stack.push_back(RShift(vch1, n.getint()));
} break;
```
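If you want to sanity-check the byte-wise behaviour yourself, here is a tiny standalone driver; it assumes the valtype typedef and the RShift/LShift helpers from the excerpt above are compiled into the same unit:

```C++
#include <cstdio>

int main() {
    valtype x = {0x80, 0x01};     // bit pattern 1000 0000 0000 0001
    valtype r = RShift(x, 1);     // expect 0x40 0x00: the low 1 falls off the right
    valtype l = LShift(x, 1);     // expect 0x00 0x02: the high 1 falls off the left
    for (auto b : r) std::printf("%02X ", b);
    std::printf("\n");
    for (auto b : l) std::printf("%02X ", b);
    std::printf("\n");
    return 0;
}
```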
Now the less good part: There is an essential difference in behaviour between the original OP_LSHIFT/OP_RSHIFT and the new BSV OP_LSHIFT/OP_RSHIFT, but again, since I don't know any use-case of this function, I don't know what the impact is, if any.
In the original version, the bit-shift was done on a CBigNum, but this has changed into a byte sequence (std::vector<uint8_t>).
The old definition:
Shifts a left b bits, preserving sign. disabled.
Shifts a right b bits, preserving sign. disabled.
https://en.bitcoin.it/wiki/Script
The new 'definition' is written as:
For the LSHIFT and RSHIFT opcodes, these opcodes were updated to be bitwise operators which means that they operate on byte sequences, not numeric values. This means that they do not have special treatment for the sign bit and they don’t overflow or underflow. They operate on all sizes of byte sequences, from zero-length up to the maximum element size (520 bytes).
Previously, the LSHIFT and RSHIFT operated on numeric values. This same functionality can be achieved through the use of script, possibly including the bitwise LSHIFT and RSHIFT opcodes.
https://www.reddit.com/btc/comments/9ce492/bsvs_new_op_lshift_and_op_rshift_are_not/
The new behaviour allows the script to shift bits on a chunk of data, but it won't take the sign-bit into account since the operand is no longer a number. If the result is then used as a number, as in the original version, then depending on the input data and shift value a positive number can become negative and vice versa!
Although the new behaviour is pretty straightforward, shifting some bytes by a number of bits, the code is not easy to read and was added without documentation and discussion. I did some minimal testing on this code, which produced the expected correct results as written in the new definition.
There can be some improvements. If LShift() or RShift() is called with n = INT_MIN, it crashes; n = -130008466 also crashes, but n = -130008467 does not. Yes, the check against negative n is at line 934 and line 951, but the functions themselves are at line 49 and line 75, without any check. If the code stays the same, please add a comment like "negative values of n may sometimes crash".
I think this shift function could be rewritten as a single function, making the sign-bit useful, and hopefully crash-free (this needs to be checked, of course).
```C++
static valtype DataBitShift(const valtype &x, int n) {
    if (n > 0) {
        [...]
    } else if (n < 0) {
        [...]
    } else {
        /* no shift */
    }
}
```
Since the boost-lib is included, some parts can be simplified with the boost::dynamic_bitset which should have the exact same behaviour. I don't know about the performance difference, but it can definitely be used when writing test cases and getting rid of the new to_bitpattern() in opcode_tests.cpp, which is not tested, but I assume works OK.
```C++
// Here a starter:
valtype x = { 0x9F, 0x11, 0xF5, 0x55 };
boost::dynamic_bitset bitMap(x.begin(), x.end());
std::cout << bitMap << std::endl;
```

I hope there is a good reason for enabling this 'feature' without consensus, because shifting a variable sized block of data by a variable number of bits without any use-case seems weird to me. Who needs the ability to shift 0-4160 bits (0-520 bytes) in any direction? And why is the sign-bit ignored?
Satoshi's vision/implementation is not the holy grail. Difference in behaviour (v0.1 vs BSV) may absolutely be justified, but we need some use-cases, not just examples of usage.
Use this code as you like, it's patent-free.
submitted by varvoid to btc [link] [comments]

Subreddit Stats: btc posts from 2019-05-28 to 2019-06-07 10:40 PDT

Period: 10.34 days
| | Submissions | Comments |
|---|---|---|
| Total | 850 | 14116 |
| Rate (per day) | 82.22 | 1245.55 |
| Unique Redditors | 440 | 1828 |
| Combined Score | 26564 | 50495 |

Top Submitters' Top Submissions

  1. 3690 points, 33 submissions: MemoryDealers
    1. Brains..... (420 points, 94 comments)
    2. The first trade has already happened on Local.bitcoin.com! (193 points, 67 comments)
    3. China is already leading the way with the most trades done on local.bitcoin.com, followed by India. We really are helping free the world! (192 points, 58 comments)
    4. More than 100 BCH has been raised in just a few days to help support BCH protocol development! (180 points, 63 comments)
    5. The Bitcoin Cash Protocol Development Fund has already raised more than 10% of its goal from 467 separate transactions!!! (180 points, 58 comments)
    6. Local.bitcoin.com (159 points, 80 comments)
    7. The BCH miners are good guy heroes! (152 points, 161 comments)
    8. The Bitcoin.com YouTube channel just pased 25K subscribers (147 points, 19 comments)
    9. Ways to trigger a BTC maximalist: Remind them that because they didn't increase the block size, fees will eventually climb to dumb levels again. This will put brakes on it's bull trend, and funnel cash into alts instead. (141 points, 107 comments)
    10. Why more and more people are switching from BTC to BCH (137 points, 193 comments)
  2. 1561 points, 20 submissions: money78
    1. "Not a huge @rogerkver fan and never really used $BCH. But he wiped up the floor with @ToneVays in Malta, and even if you happen to despise BCH, it’s foolish and shortsighted not to take these criticisms seriously. $BTC is very expensive and very slow." (261 points, 131 comments)
    2. Jonathan Toomim: "At 32 MB, we can handle something like 30% of Venezuela's population using BCH 2x per day. Even if that's all BCH ever achieved, I'd call that a resounding success; that's 9 million people raised out of poverty. Not a bad accomplishment for a hundred thousand internet geeks." (253 points, 180 comments)
    3. CEO of CoinEx: "CoinEx already add SLP token solution support. The first SLP token will list on CoinEx Soon. Also welcome apply to list SLP tokens on CoinEx." (138 points, 18 comments)
    4. "While Ethereum smart contracts have a lot more functionality than those in Bitcoin Cash, with the upcoming CashScript we've tried to replicate a big part of the workflow, hopefully making it easier for developers to engage with both of these communities. Check it out 🚀" (120 points, 35 comments)
    5. Bitcoin ABC 0.19.7 is now available! This release includes RPC and wallet improvements, and a new transaction index database. See the release notes for details. (104 points, 5 comments)
    6. Vin Armani: "Huge shout out to the @BitcoinCom wallet team! I just heard from a very authoritative source that multi-output BIP 70 support has been successfully tested and will be in a near-term future release. Now, the most popular BCH wallet will support Non-Custodial Financial Services!" (88 points, 23 comments)
    7. BSV folks: Anything legal is good...We want our coin to be legal! (79 points, 66 comments)
    8. BCH fees vs BTC fees (78 points, 85 comments)
    9. "This @CashShuffle on BCH looks awesome. The larger blocksize on BCH allows for cheap on-chain transactions. @CashShuffle leverages this in a very creative way to gain privacy. Ignoring the tribalism, it's fascinating to watch BCH vs. BTC compete in the marketplace." (77 points, 3 comments)
    10. Bitcoin Cash the best that bitcoin can be...🔥💪 (60 points, 9 comments)
  3. 1413 points, 18 submissions: Egon_1
    1. "The claim “Bitcoin was purpose-built to first be a Store of Value” is false. In this article I've posting every single instance I could find across everything Satoshi ever wrote related to store of value or payments. It wasn't even close. Payments win." (299 points, 82 comments)
    2. The Art of Rewriting History ... File this under Deception! (184 points, 69 comments)
    3. Today's Next Block Fee: BTC ($3.55) and BCH ($0.00). Enjoy! (120 points, 101 comments)
    4. Andreas Brekken: "The maxi thought leaders have a ⚡in their username but can't describe a bidirectional payment channel. Ask questions? They attack you until you submit or leave. Leave? You're a scammer....." (115 points, 11 comments)
    5. Tone Vays: "So I will admit, I did terrible in the Malta Debate vs @rogerkver [...]" (107 points, 95 comments)
    6. This Week in Bitcoin Cash (96 points, 10 comments)
    7. “There was no way to win that debate. Roger came armed with too much logic and facts.” (78 points, 1 comment)
    8. BTC supporter enters a coffee shop: "I like to pay $3 premium security fee for my $4 coffee ☕️" (64 points, 100 comments)
    9. Matt Corallo: "... the worst parts of Bitcoin culture reliably come from folks like @Excellion and a few of the folks he has hired at @Blockstream ..." (63 points, 43 comments)
    10. Angela Walch: "Is there a resource that keeps an up-to-date list of those who have commit access to the Bitcoin Core Github repo & who pays them for their work on Bitcoin? In the past, getting this info has required digging. Is that still the case? " (57 points, 5 comments)
  4. 852 points, 11 submissions: jessquit
    1. PSA: BTC not working so great? Bitcoin upgraded in 2017. The upgraded Bitcoin is called BCH. There's still time to upgrade! (185 points, 193 comments)
    2. Nobody uses Bitcoin Cash (178 points, 89 comments)
    3. Yes, Bitcoin was always supposed to be gold 2.0: digital gold that you could use like cash, so you could spend it anywhere without needing banks and gold notes to make it useful. So why is Core trying to turn it back into gold 1.0? (112 points, 85 comments)
    4. This interesting conversation between Jonathan Toomim and @_drgo where jtoomim explains how large blocks actually aren't a centralization driver (89 points, 36 comments)
    5. This Twitter conversation between Jonathan Toomim and Adam Back is worth a read (75 points, 15 comments)
    6. In October 2010 Satoshi proposed a hard fork block size upgrade. This proposed upgrade was a fundamental factor in many people's decision to invest, myself included. BCH implemented this upgrade. BTC did not. (74 points, 41 comments)
    7. what do the following have in common: Australia, Canada, USA, Hong Kong, Jamaica, Liberia, Namibia, New Zealand, Singapore, Taiwan, Caribbean Netherlands, East Timor, Ecuador, El Salvador, the Federated States of Micronesia, the Marshall Islands, Palau, Zimbabwe (47 points, 20 comments)
    8. Core myth dispelled: how Bitcoin offers sovereignty (45 points, 65 comments)
    9. Satoshi's Speedbump: how Bitcoin's goldlike scarcity helps address scaling worries (25 points, 9 comments)
    10. Greater Fool Theory (14 points, 13 comments)
  5. 795 points, 7 submissions: BitcoinXio
    1. Erik Voorhees on Twitter: “I wonder if you realize that if Bitcoin didn’t work well as a payment system in the early days it likely would not have taken off. Many (most?) people found the concept of instant borderless payments captivating and inspiring. “Just hold this stuff” not sufficient.” (297 points, 68 comments)
    2. On Twitter: “PSA: The Lightning Network is being heavily data mined right now. Opening channels allows anyone to cluster your wallet and associate your keys with your IP address.” (226 points, 102 comments)
    3. Shocking (not): Blockstream has had a hard time getting business due to their very bad reputation (73 points, 25 comments)
    4. While @PeterMcCormack experiments with his #LightningNetwork bank, waiting over 20 seconds to make a payment, real P2P #Bitcoin payments have already arrived on #BitcoinCash. (66 points, 94 comments)
    5. This is what we’re up against. Mindless sheep being brain washed and pumping Bitcoin (BTC) as gold to try to make a buck. (56 points, 29 comments)
    6. Tuur Demeester: “At full maturity, using the Bitcoin blockchain will be as rare and specialized as chartering an oil tanker.” (54 points, 61 comments)
    7. ‪Bitcoin Cash 101: What Happens When We Decentralize Money? ‬ (23 points, 2 comments)
  6. 720 points, 2 submissions: InMyDayTVwasBooks
    1. A Reminder Why You Shouldn’t Use Google. (619 points, 214 comments)
    2. 15 Years Ago VS. Today: How Tech Scales (101 points, 53 comments)
  7. 485 points, 15 submissions: JonyRotten
    1. Cashscript Is Coming, Bringing Ethereum-Like Smart Contracts to Bitcoin Cash (96 points, 6 comments)
    2. Localbitcoins Removes In-Person Cash Trades Forcing Traders to Look Elsewhere (86 points, 26 comments)
    3. Bitcoin.com's Local Bitcoin Cash Marketplace Is Now Open for Trading (48 points, 22 comments)
    4. Report Insists 'Bitcoin Was Not Purpose-Built to First Be a Store of Value' (48 points, 8 comments)
    5. BCH Businesses Launch Development Fund for Bitcoin Cash (36 points, 1 comment)
    6. Another Aspiring Satoshi Copyrights the Bitcoin Whitepaper (31 points, 0 comments)
    7. Bitcoin Cash and SLP-Fueled Badger Wallet Launches for iOS (27 points, 0 comments)
    8. Bitcoin Mining With Solar: Less Risky and More Profitable Than Selling to the Grid (26 points, 0 comments)
    9. Former Mt Gox CEO Mark Karpeles Announces New Blockchain Startup (25 points, 25 comments)
    10. Mixing Service Bitcoin Blender Quits After Bestmixer Takedown (23 points, 7 comments)
  8. 426 points, 2 submissions: btcCore_isnt_Bitcoin
    1. Ponder the power of propaganda, Samson Mow, Adam Back and Greg Maxwell all know how import control of bitcoin is. (394 points, 98 comments)
    2. How many Bitcoin Core supporters does it take to change a light bulb? (32 points, 35 comments)
  9. 369 points, 3 submissions: where-is-satoshi
    1. Currently you must buy 11,450 coffees on a single Lightning channel to match the payment efficiency of Bitcoin BCH - you will also need to open an LN channel with at least $47,866 (230 points, 173 comments)
    2. North Queensland's Beauty Spot finds Bitcoin BCH a thing of beauty (74 points, 6 comments)
    3. Can't start the day without a BCHinno (65 points, 9 comments)
  10. 334 points, 5 submissions: AD1AD
    1. You Can Now Send Bitcoin Cash to Mobile Phones in Electron Cash Using Cointext! (132 points, 32 comments)
    2. Merchants are Dropping Multi-Coin PoS for One Cryptocurrency: Bitcoin Cash (73 points, 21 comments)
    3. A Stellar Animated Video from CoinSpice Explaining how CashShuffle Works Under the Hood! (67 points, 10 comments)
    4. If you haven't seen the "Shit Bitcoin Cash Fanatics Say" videos from Scott Rose (The Inspirational Nerd), YOU NEED TO DO IT NOWWW (50 points, 7 comments)
    5. New Video from Bitcoin Out Loud: "Can You Store Data on the Bitcoin Blockchain?" (Spoiler: Not really.) (12 points, 10 comments)
  11. 332 points, 6 submissions: eyeofpython
    1. I believe the BCH denomination is the best (in contrast to bits, cash and sats), if used with eight digits & spaces: 0.001 234 00 BCH. This way both the BCH and the satoshi amount is immediately clear. Once the value of a satoshi gets close to 1¢, the dot can simply be dropped. (112 points, 41 comments)
    2. Only after writing more BCH Script I realized how insanely usefull all the new opcodes are — CDS and those activated/added back in May '18. Kudos to the developers! (104 points, 22 comments)
    3. CashProof is aready so awesome it can formally prove all optimizations Spedn uses, except one. Great news for BCH smart contracts! (51 points, 6 comments)
    4. Proposal for a new opcode: OP_REVERSE (43 points, 55 comments)
    5. My response on your guy's critisism of OP_REVERSE and the question of why the SLP protocol (and others) don't simply switch to little endian (20 points, 25 comments)
    6. random post about quantum physics (both relevant and irrelevant for Bitcoin at the same time) (2 points, 11 comments)
  12. 322 points, 6 submissions: unitedstatian
    1. BCH is victim to one of the biggest manipulation campaigns in social media: Any mention of BCH triggered users instantly to spam "BCASH".. until BSV which is a BCH fork and almost identical to it pre-November fork popped out of nowhere and suddenly social media is spammed with pro-BSV posts. (131 points, 138 comments)
    2. LocalBitcoins just banned cash. It really only goes to show everything in the BTC ecosystem is compromised. (122 points, 42 comments)
    3. The new narrative of the shills who moved to promoting bsv: Bitcoin was meant to be government-friendly (33 points, 138 comments)
    4. Hearn may have been the only sober guy around (21 points, 29 comments)
    5. PSA: The economical model of the Lightning Network is unsound. The LN will support different coins which will be interconnected and since the LN tokens will be transacted instead of the base coins backing them up their value will be eroded over time. (14 points, 8 comments)
    6. DARPA-Funded Study Looks at How Crypto Chats Spread on Reddit (1 point, 0 comments)
  13. 313 points, 8 submissions: CreativeName44
    1. Venezuela Hidden Bitcoin Cash paper wallet claimed with 0.17468 BCH! Congrats to the one who found it! (80 points, 0 comments)
    2. Alright BCH Redditors, Let's make some HUGE noise!! Announcing The NBA finals Toronto Raptors Hidden BCH Wallet!! (60 points, 9 comments)
    3. FindBitcoinCash gaining traction around the world - Calling out to Bitcoin Cashers to join the fun!! (41 points, 0 comments)
    4. The Toronto Raptors Bitcoin Cash Wallet has been hidden: Address qz72j9e906g7pes769yp8d4ltdmh4ajl9vf76pj0v9 (PLS RT - Some local media tagged on it) (39 points, 0 comments)
    5. This is the next BitcoinCash wallet that is going to be hidden, hopefully REALLY soon! (36 points, 13 comments)
    6. Bitcoin Cash Meetups From Around the World added to FindBitcoinCash (25 points, 0 comments)
    7. FindBitcoinCash Wallets in other languages English/Spanish/Lithuanian/Swedish/Korean (20 points, 18 comments)
    8. Thank you for a great article!! (12 points, 0 comments)
  14. 312 points, 1 submission: scriberrr
    1. WHY? (312 points, 49 comments)
  15. 311 points, 4 submissions: Anenome5
    1. Libertarian sub GoldandBlack is hosting a free, live online workshop about how to setup and use Electron Cash on Sat 1st June via discord, including how to use Cashshuffle, with a Q&A session to follow. All are invited! (119 points, 40 comments)
    2. For anyone who still hasn't seen this, here is Peter Rizun and Andrew Stone presenting their research on how to do 1 gigabyte blocks, all the way back in 2017 at the Scaling Bitcoin Conference. The BTC camp has known we can scale bitcoin on-chain for years, they just don't want to hear it. (92 points, 113 comments)
    3. @ the trolls saying "No one uses Bitcoin Cash", let's look at the last 60 blocks... (72 points, 84 comments)
    4. Research Reveals Feasibility of 1TB Blocks, 7M Transactions per Second (28 points, 22 comments)
  16. 293 points, 2 submissions: BeijingBitcoins
    1. /Bitcoin mods are censoring posts that explain why BitPay has to charge an additional fee when accepting BTC payments (216 points, 110 comments)
    2. Meetups and adoption don't just happen organically, but are the result of the hard work of passionate community members. There are many others out there but these girls deserve some recognition! (77 points, 9 comments)
  17. 282 points, 1 submission: EddieFrmDaBlockchain
    1. LEAKED: Attendee List for Buffet Charity Lunch (282 points, 98 comments)
  18. 273 points, 4 submissions: HostFat
    1. Breakdown of all Satoshi’s Writings Proves Bitcoin not Built Primarily as Store of Value (159 points, 64 comments)
    2. Just to remember - When you are afraid that the market can go against you, use the state force. (48 points, 5 comments)
    3. CypherPoker.JS v0.5.0 - P2P Poker - Bitcoin Cash support added! (35 points, 3 comments)
    4. Feature request as standard for all bch mobile wallets (31 points, 12 comments)
  19. 262 points, 3 submissions: CaptainPatent
    1. Lightning Network capacity takes a sudden dive well below 1k BTC after passing that mark back in March. (97 points, 149 comments)
    2. Yeah, how is it fair that Bitpay is willing to eat a $0.0007 transaction fee and not a $2+ transaction fee?! (89 points, 59 comments)
    3. BTC Fees amplified today by last night's difficulty adjustment. Current (peak of day) next-block fees are testing new highs. (76 points, 59 comments)
  20. 262 points, 1 submission: Badrush
    1. Now I understand why Bitcoin Developers hate on-chain solutions like increasing block sizes. (262 points, 100 comments)

Top Commenters

  1. jessquit (2337 points, 242 comments)
  2. LovelyDay (1191 points, 160 comments)
  3. Ant-n (1062 points, 262 comments)
  4. MemoryDealers (977 points, 62 comments)
  5. jtoomim (880 points, 108 comments)
  6. 500239 (841 points, 142 comments)
  7. jonald_fyookball (682 points, 86 comments)
  8. ShadowOfHarbringer (672 points, 110 comments)
  9. money78 (660 points, 41 comments)
  10. playfulexistence (632 points, 76 comments)
  11. Bagatell_ (586 points, 72 comments)
  12. Big_Bubbler (552 points, 196 comments)
  13. homopit (551 points, 79 comments)
  14. Anenome5 (543 points, 130 comments)
  15. WippleDippleDoo (537 points, 111 comments)
  16. MobTwo (530 points, 52 comments)
  17. FalltheBanks3301 (483 points, 87 comments)
  18. btcfork (442 points, 115 comments)
  19. chainxor (428 points, 71 comments)
  20. eyeofpython (425 points, 78 comments)

Top Submissions

  1. A Reminder Why You Shouldn’t Use Google. by InMyDayTVwasBooks (619 points, 214 comments)
  2. Brains..... by MemoryDealers (420 points, 94 comments)
  3. Ponder the power of propaganda, Samson Mow, Adam Back and Greg Maxwell all know how import control of bitcoin is. by btcCore_isnt_Bitcoin (394 points, 98 comments)
  4. WHY? by scriberrr (312 points, 49 comments)
  5. "The claim “Bitcoin was purpose-built to first be a Store of Value” is false. In this article I've posting every single instance I could find across everything Satoshi ever wrote related to store of value or payments. It wasn't even close. Payments win." by Egon_1 (299 points, 82 comments)
  6. Erik Voorhees on Twitter: “I wonder if you realize that if Bitcoin didn’t work well as a payment system in the early days it likely would not have taken off. Many (most?) people found the concept of instant borderless payments captivating and inspiring. “Just hold this stuff” not sufficient.” by BitcoinXio (297 points, 68 comments)
  7. LEAKED: Attendee List for Buffet Charity Lunch by EddieFrmDaBlockchain (282 points, 98 comments)
  8. Now I understand why Bitcoin Developers hate on-chain solutions like increasing block sizes. by Badrush (262 points, 100 comments)
  9. "Not a huge @rogerkver fan and never really used $BCH. But he wiped up the floor with @ToneVays in Malta, and even if you happen to despise BCH, it’s foolish and shortsighted not to take these criticisms seriously. $BTC is very expensive and very slow." by money78 (261 points, 131 comments)
  10. Jonathan Toomim: "At 32 MB, we can handle something like 30% of Venezuela's population using BCH 2x per day. Even if that's all BCH ever achieved, I'd call that a resounding success; that's 9 million people raised out of poverty. Not a bad accomplishment for a hundred thousand internet geeks." by money78 (253 points, 180 comments)

Top Comments

  1. 109 points: mossmoon's comment in Now I understand why Bitcoin Developers hate on-chain solutions like increasing block sizes.
  2. 104 points: _degenerategambler's comment in Nobody uses Bitcoin Cash
  3. 96 points: FreelanceForCoins's comment in A Reminder Why You Shouldn’t Use Google.
  4. 94 points: ThomasZander's comment in "Not a huge @rogerkver fan and never really used $BCH. But he wiped up the floor with @ToneVays in Malta, and even if you happen to despise BCH, it’s foolish and shortsighted not to take these criticisms seriously. $BTC is very expensive and very slow."
  5. 91 points: cryptotrillionaire's comment in The Art of Rewriting History ... File this under Deception!
  6. 87 points: tjonak's comment in A Reminder Why You Shouldn’t Use Google.
  7. 86 points: money78's comment in Tone Vays: "So I will admit, I did terrible in the Malta Debate vs @rogerkver [...]"
  8. 83 points: discoltk's comment in "Not a huge @rogerkver fan and never really used $BCH. But he wiped up the floor with @ToneVays in Malta, and even if you happen to despise BCH, it’s foolish and shortsighted not to take these criticisms seriously. $BTC is very expensive and very slow."
  9. 79 points: jessquit's comment in Ways to trigger a Shitcoin influencer Part 1: Remind them that’s it’s very likely they got paid to shill fake Bitcoin to Noobs
  10. 78 points: PaladinInc's comment in The BCH miners are good guy heroes!
Generated with BBoe's Subreddit Stats
submitted by subreddit_stats to subreddit_stats [link] [comments]

Why should OP_CHECKDATASIG/VERIFY be privileged to use two of the very few free one byte opcodes?

In Script, every opcode is currently encoded as a single byte. Only a few (around 10?) of the 256 possible byte values are free; the rest are already used for opcodes or for pushing numeric constants onto the stack.
How do we know that OP_CHECKDATASIG/VERIFY is the correct use of this limited namespace?
What should we do when we run out of free opcode values?
https://en.bitcoin.it/wiki/Script (This is Bitcoin Core, but mostly correct for BCH)
submitted by AhPh9U to btc [link] [comments]

Bitcoin Cash Opcodes vs Ethereum Smart Contracts

I was reading a bit about the advantages that Ethereum has compared to Bitcoin Cash, and it seems to come down to mostly three things: faster blocks, GPU friendliness, and smart contracts.
1) Faster blocks. BCH creates a new block every ~10 min while ETH makes one every ~12 sec. This isn't a big deal to me as long as BCH keeps the irreversible nature of each transaction. I also imagine that having fewer blocks per hour might make things simpler in the future if an insane number of transactions happens every minute.
2) GPU friendly (or rather, hostile towards specialized hardware). This is a problem, but unfortunately I don't think there's any going back now, since miners have already invested in their specialized hardware, and if BCH were to change the algorithm, support for BCH would drop a lot. The one-machine-one-vote vision is gone, but it doesn't really break the whole system as long as there are many miners willing to invest in specialized hardware.
3) Smart contracts on the blockchain. This seems to be the biggest plus Ethereum has over BCH today. You can put code into blocks and create mini-programs on the blockchain that will do things if you send ETH to that address (along with some "gas", a special ETH fee that lets the program run a certain number of cycles, since the miner needs to use some electricity to execute the code). The ETH sent to the address doesn't belong to any wallet; the code must pass it along to other addresses and can take conditions into account.
So, what I'm wondering is if the upcoming changes to Bitcoin Cash will be able to compete with these "contracts" that Ethereum has? On https://en.wikipedia.org/wiki/Smart_contract it says that they are "nearly Turing-complete", which is an impressive feature to have in your crypto.
In https://www.bitcoinabc.org/bitcoin-abc-medium-term-development (mirror: https://i.imgur.com/prvJ6jb.png ) it says that one of the next steps are: "Re-activate some deactivated Opcodes, and move toward adding protocol extension points to facilitate future Opcode upgrades"
Will these opcodes provide the same functionality that Ethereum contracts provide? By the way, if I missed some big (or small) difference that Ethereum has compared to Bitcoin Cash, please post it! Right now it looks like BCH's biggest competitor for the crown of crypto is ETH, not BTC.
submitted by LaudedSwanSong to btc [link] [comments]

Steel Man Argument | Embracing the Lightning Network in Bitcoin Cash

Most proponents of Bitcoin Cash have some negative view of the Lightning Network, ranging from skepticism to outright hatred. There are reasons to be skeptical, and in fact there may be reasons to be suspicious of the motives behind the Lightning Network.
However, there might be reasons to say: "Great, we (the Bitcoin Cash community) are entirely open minded to the Lightning Network working, and we've implemented a malleability fix and/or whatever else might be needed to make BCH Lightning-compatible."
If we do this, then we don't have to be as militantly opposed to Lightning; we can just argue on the basis that Lightning is an experiment and either:
What I love about the Bitcoin (Cash) community is that we are very open minded. Changes like OP_GROUP and the opcode-related changes are very exciting. I think we should work on embracing the LN as well.
If we do this, we are doing something similar to steel-manning, where we present our "opponents" in the best possible light.
Just a thought. Curious what you guys think? Let's make Bitcoin succeed massively!
submitted by bitcoin_permabull to btc [link] [comments]

Had a quick (1.5 hours) look at the Counterparty GitHub code (master branch)

I spent 1.5 hours looking at their code on GitHub (master branch), and also read a bit of their documentation. Here is what I found:
So all in all, I do not see why there was such a fuss about Counterparty today - nothing major happened there. And, as mentioned before, having a 10-minute block time for smart contracts could be quite inconvenient, and has the potential to reduce their applicability.
submitted by ledgerwatch to ethtrader [link] [comments]

hardfork etiquette: include replay protection

If you're going to HF, you ought to include replay protection. Is there anything unreasonable about this position?
Technically:
Perhaps we could add an operation to script, say OP_BLOCK_HASH, which takes two arguments, a block index B and a hash H, and returns true if block B on that chain has hash H. From then on, transactions would be able to assert which chain they are on by referring to post-fork block hashes.
I've read that "New opcodes can be added by means of a carefully designed and executed softfork using OP_NOP1-OP_NOP10." ( https://en.bitcoin.it/wiki/Script )
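A rough sketch of the check such an opcode would perform inside a node, against whatever chain that node considers active (ChainView and BlockHashAt are illustrative stand-ins, not real Bitcoin Core APIs, and OP_BLOCK_HASH itself is only the proposal above):

#include <array>
#include <cstdint>
#include <optional>
#include <vector>

using uint256 = std::array<uint8_t, 32>;

// Toy stand-in for the node's view of its active chain: index == block height.
struct ChainView {
    std::vector<uint256> blockHashes;
    std::optional<uint256> BlockHashAt(size_t height) const {
        if (height >= blockHashes.size()) return std::nullopt;
        return blockHashes[height];
    }
};

// The hypothetical OP_BLOCK_HASH check: true only if block B on this node's
// chain has hash H. After a fork the two chains disagree on post-fork hashes,
// so a transaction committing to a post-fork (B, H) pair can only be valid on
// one side, which is what gives it replay protection.
bool EvalOpBlockHash(const ChainView& chain, size_t b, const uint256& h) {
    const auto actual = chain.BlockHashAt(b);
    return actual.has_value() && *actual == h;
}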
Politically:
Perhaps economic nodes could take a stand (form a new agreement) by declaring that replay protection is good etiquette and therefore they won't adopt chains that lack it.
submitted by but_without_words to Bitcoin [link] [comments]

The missing explanation of Proof of Stake Version 3 - Article by earlz.net

The missing explanation of Proof of Stake Version 3

In every cryptocurrency there must be some consensus mechanism which keeps the entire distributed network in sync. When Bitcoin first came out, it introduced the Proof of Work (PoW) system. PoW is done by cryptographically hashing a piece of data (the block header) over and over. Because of how one-way hashing works, one tiny change in the data can produce a completely different hash. Participants in the network determine whether the PoW is valid and complete by judging whether the final hash meets a certain condition, called difficulty. The difficulty is an ever-changing "target" which the hash must beat (in practice, the hash must come in below the target). Whenever the network is creating more blocks than scheduled, this target is changed automatically by the network so that it becomes more and more difficult to meet, and thus requires more and more computing power to find a hash that matches the target within the target time of 10 minutes.

Definitions

Some basic definitions might be unfamiliar to people who haven't worked with the blockchain code; these are:

Proof of Work and Blockchain Consensus Systems

Proof of Work is a proven consensus mechanism that has made Bitcoin secure and trustworthy for 8 years now. However, it is not without its fair share of problems. PoW's major drawbacks are:
  1. PoW wastes a lot of electricity, harming the environment.
  2. PoW benefits greatly from economies of scale, so it tends to benefit big players the most, rather than small participants in the network.
  3. PoW provides no incentive to use or keep the tokens.
  4. PoW has some centralization risks, because it tends to encourage miners to participate in the biggest mining pool (a group of miners who share the block reward), thus the biggest mining pool operator holds a lot of control over the network.
Proof of Stake was invented to solve many of these problems by allowing participants to create and mine new blocks (and thus also get a block reward), simply by holding onto coins in their wallet and allowing their wallet to do automatic "staking". Proof Of Stake was originally invented by Sunny King and implemented in Peercoin. It has since been improved and adapted by many other people. This includes "Proof of Stake Version 2" by Pavel Vasin, "Proof of Stake Velocity" by Larry Ren, and most recently CASPER by Vlad Zamfir, as well as countless other experiments and lesser known projects.
For Qtum we have decided to build upon "Proof of Stake Version 3", an improvement over version 2 that was also made by Pavel Vasin and implemented in the Blackcoin project. This version of PoS as implemented in Blackcoin is what we will be describing here. Some minor details of it have been modified in Qtum, but the core consensus model is identical.
For many community members and developers alike, proof of stake is a difficult topic, because very little has been written on how it manages to keep the network safe using only proof of ownership of tokens on the network. This blog post will go into fine detail about Proof of Stake Version 3: how its blocks are created and validated, and ultimately how a pure Proof of Stake blockchain can be secured. This will assume some technical knowledge, but I will try to explain things so that most of the knowledge can be gathered from context. You should at least be familiar with the concept of a UTXO-based blockchain.
Before we talk about PoS, it helps to understand how the much simpler PoW consensus mechanism works. Its mining process can be described in just a few lines of pseudo-code:
while (blockhash > difficulty) {
    block.nonce = block.nonce + 1
    blockhash = sha256(sha256(block))
}
A hash is a cryptographic algorithm which takes an arbitrary amount of input data, does a lot of processing of it, and outputs a fixed-size "digest" of that data. It is impossible to figure out the input data with just the digest. So, PoW tends to function like a lottery, where you find out if you won by creating the hash and checking it against the target, and you create another ticket by changing some piece of data in the block. In Bitcoin's case, the nonce is used for this, as well as some other fields (usually called "extraNonce"). Once a blockhash is found which is less than the difficulty target, the block is valid, and can be broadcast to the rest of the distributed network. Miners will then see it and start building the next block on top of this block.
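As a toy illustration of that lottery, here is the same loop in runnable form, with FNV-1a standing in for double SHA-256 purely so the snippet is self-contained; the structure, not the hash function, is the point:

#include <cstdint>
#include <iostream>
#include <string>

// Non-cryptographic stand-in hash, used only so the example runs on its own.
static uint64_t fnv1a(const std::string& data) {
    uint64_t h = 1469598103934665603ULL;
    for (unsigned char c : data) { h ^= c; h *= 1099511628211ULL; }
    return h;
}

int main() {
    const std::string header = "block header fields...";
    const uint64_t target = UINT64_MAX >> 20;   // smaller target == higher difficulty
    uint64_t nonce = 0;
    while (fnv1a(header + std::to_string(nonce)) > target) {
        ++nonce;                                 // each nonce is a new lottery ticket
    }
    std::cout << "found nonce " << nonce << std::endl;
    return 0;
}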

Proof of Stake's Protocol Structures and Rules

Now enter Proof of Stake. We have these goals for PoS:
  1. Impossible to counterfeit a block
  2. Big players do not get disproportionally bigger rewards
  3. More computing power is not useful for creating blocks
  4. No one member of the network can control the entire blockchain
The core concept of PoS is very similar to PoW: a lottery. However, the big difference is that it is not possible to "get more tickets" in the lottery by simply changing some data in the block. Instead of the "block hash" being the lottery ticket judged against a target, PoS introduces the notion of a "kernel hash".
The kernel hash is composed of several pieces of data that are not readily modifiable in the current block. And so, because miners do not have an easy way to modify the kernel hash, they cannot simply iterate through a large number of hashes as in PoW.
Proof of Stake blocks add many additional consensus rules in order to realize these goals. First, unlike in PoW, the coinbase transaction (the first transaction in the block) must be empty and reward 0 tokens. Instead, to reward stakers, there is a special "stake transaction" which must be the 2nd transaction in the block. A stake transaction is defined as any transaction that:
  1. Has at least 1 valid vin
  2. Its first vout must be an empty script
  3. Its second vout must not be empty
Furthermore, staking transactions must abide by these rules to be valid in a block:
  1. The second vout must be either a pubkey (not pubkeyhash!) script, or an OP_RETURN script that is unspendable (data-only) but stores data for a public key
  2. The timestamp in the transaction must be equal to the block timestamp
  3. the total output value of a stake transaction must be less than or equal to the total inputs plus the PoS block reward plus the block's total transaction fees. output <= (input + block_reward + tx_fees)
  4. The first spent vin's output must be confirmed by at least 500 blocks (in other words, the coins being spent must be at least 500 blocks old)
  5. Though more vins can be used and spent in a staking transaction, the first vin is the only one used for consensus parameters.
These rules ensure that the stake transaction is easy to identify and that it gives the blockchain enough info to validate the block. The empty-vout method is not the only way staking transactions could have been identified, but this was the original design from Sunny King and it has worked well enough.
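Restating the staking-transaction rules above as a compact check (the types and field names are simplified stand-ins rather than the actual Qtum/Blackcoin structures, and the pubkey/OP_RETURN script-type check on the second vout is omitted):

#include <cstdint>
#include <vector>

// Simplified stand-ins for the real chain types; illustrative only.
struct TxOut { uint64_t value; std::vector<uint8_t> script; };
struct TxIn  { uint32_t prevoutDepth; };   // confirmations of the spent output
struct Tx    { std::vector<TxIn> vin; std::vector<TxOut> vout; uint32_t time; };

// Mirrors the rules listed above (sketch, not consensus code): at least one
// vin, empty first vout, non-empty second vout, timestamp equal to the block's,
// outputs bounded by inputs + reward + fees, first input at least 500 blocks deep.
bool LooksLikeValidStakeTx(const Tx& tx, uint32_t blockTime,
                           uint64_t totalInputValue, uint64_t blockReward,
                           uint64_t txFees) {
    if (tx.vin.empty() || tx.vout.size() < 2) return false;
    if (!tx.vout[0].script.empty()) return false;            // first vout must be empty
    if (tx.vout[1].script.empty()) return false;             // second vout must not be empty
    if (tx.time != blockTime) return false;                  // tx time == block time
    uint64_t totalOut = 0;
    for (const auto& o : tx.vout) totalOut += o.value;
    if (totalOut > totalInputValue + blockReward + txFees)   // output <= input + reward + fees
        return false;
    if (tx.vin[0].prevoutDepth < 500) return false;          // 500-block maturity of first vin
    return true;
}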
Now that we understand what a staking transaction is, and what rules they must abide by, the next piece is to cover the rules for PoS blocks:
There are a lot of details here that we'll cover in a bit. The most important part that really makes PoS effective lies in the "kernel hash". The kernel hash is used similarly to PoW (if the hash meets the difficulty, then the block is valid). However, the kernel hash is not directly modifiable in the context of the current block. We will first cover exactly what goes into these structures and mechanisms, and later explain why this design is exactly this way, and what unexpected consequences can come from minor changes to it.

The Proof of Stake Kernel Hash

The kernel hash specifically consists of the following exact pieces of data (in order), matching the pseudo-code further below: the previous block's stake modifier, then the staked UTXO's time, hash, and output number (utxo.time, utxo.hash, utxo.n), and finally the current block time.
The stake modifier of a block is a hash of exactly:
The only way to change the current kernel hash (in order to mine a block), is thus to either change your "prevout", or to change the current block time.
A single wallet typically contains many UTXOs. The balance of the wallet is basically the total amount of all the UTXOs that can be spent by the wallet. This is of course the same in a PoS wallet. This is important, though, because any output can be used for staking. One of these outputs is what can become the "prevout" in a staking transaction to form a valid PoS block.
Finally, there is one more aspect that is changed in the mining process of a PoS block. The difficulty is weighted against the number of coins in the staking transaction. The PoS difficulty ends up being twice as easy to achieve when staking 2 coins, compared to staking just 1 coin. If this were not the case, then it would encourage creating many tiny UTXOs for staking, which would bloat the size of the blockchain and ultimately cause the entire network to require more resources to maintain, as well as potentially compromise the blockchain's overall security.
So, if we were to show some pseudo-code for finding a valid kernel hash now, it would look like:
while(true) {
    foreach(utxo in wallet) {
        blockTime = currentTime - currentTime % 16
        posDifficulty = difficulty * utxo.value
        hash = hash(previousStakeModifier << utxo.time << utxo.hash << utxo.n << blockTime)
        if (hash < posDifficulty) {
            done
        }
    }
    wait 16s -- wait 16 seconds, until the block time can be changed
}
This code isn't as easy to understand as our PoW example, so I'll attempt to explain it in plain English:
Do the following over and over, indefinitely:
  1. Calculate the blockTime to be the current time minus itself modulo 16 (modulo is like dividing by 16, but taking the remainder instead of the result); for example, a current time of 1,234,567 becomes 1,234,560.
  2. Calculate the posDifficulty as the network difficulty multiplied by the number of coins held by the UTXO.
  3. Cycle through each UTXO in the wallet. For each UTXO, calculate a SHA256 hash using the previous block's stake modifier, some data from the UTXO, and the blockTime. Compare this hash to the posDifficulty. If the hash is less than the posDifficulty, then the kernel hash is valid and you can create a new block.
  4. After going through all UTXOs, if no hash produced is less than the posDifficulty, then wait 16 seconds and do it all over again.
Now that we have found a valid kernel hash using one of the UTXOs we can spend, we can create a staking transaction. This staking transaction will have 1 vin, which spends the UTXO we found that has a valid kernel hash. It will have (at least) 2 vouts. The first vout will be empty, identifying to the blockchain that it is a staking transaction. The second vout will either contain an OP_RETURN data transaction that contains a single public key, or it will contain a pay-to-pubkey script. The latter is usually used for simplicity, but using a data transaction for this allows for some advanced use cases (such as a separate block signing machine) without needlessly cluttering the UTXO set.
Finally, any transactions from the mempool are added to the block. The only thing left to do now is to create a signature, proving that we have approved the otherwise valid PoS block. The signature must use the public key that is encoded (either as a pay-to-pubkey script, or as a data OP_RETURN script) in the second vout of the staking transaction. The actual data signed is the block hash. After the signature is applied, the block can be broadcast to the network. Nodes in the network will then validate the block, and if a node finds it valid and there is no better blockchain, it will accept it into its own blockchain and broadcast the block to all the nodes it has a connection to.
Now we have a fully functional and secure PoSv3 blockchain. PoSv3 is what we determined to be most resistant to attack while maintaining a pure decentralized consensus system (ie, without master nodes or curators). To understand why we reached this conclusion, however, we must understand its history.

PoSv3's History

Proof of Stake has a fairly long history. I won't cover every detail, but will broadly cover what changed between each version to arrive at PoSv3, for historical purposes:
PoSv1 - This version is implemented in Peercoin. It relied heavily on the notion of "coin age", or how long a UTXO has gone unspent on the blockchain. Its implementation would basically make it so that the higher the coin age, the more the difficulty is reduced. This had the bad side-effect, however, of encouraging people to open their wallet only every month or longer for staking. Assuming the coins were all relatively old, they would almost instantaneously produce new staking blocks. This, however, makes double-spend attacks extremely easy to execute. Peercoin itself is not affected by this because it is a hybrid PoW and PoS blockchain, so the PoW blocks mitigated this effect.
PoSv2 - This version removes coin age completely from consensus, as well as using a completely different stake modifier mechanism from v1. The number of changes are too numerous to list here. All of this was done to remove coin age from consensus and make it a safe consensus mechanism without requiring a PoW/PoS hybrid blockchain to mitigate various attacks.
PoSv3 - PoSv3 is really more of an incremental improvement over PoSv2. In PoSv2 the stake modifier also included the previous block time. This was removed to prevent a "short-range" attack where it was possible to iteratively mine an alternative blockchain by iterating through previous block times. PoSv2 used block and transaction times to determine the age of a UTXO; this is not the same as coin age, but rather is the "minimum confirmations required" before a UTXO can be used for staking. This was changed to a much simpler mechanism where the age of a UTXO is determined by its depth in the blockchain. This thus doesn't incentivize inaccurate timestamps to be used on the blockchain, and is also more immune to "timewarp" attacks. PoSv3 also added support for OP_RETURN coinstake transactions which allows for a vout to contain the public key for signing the block without requiring a full pay-to-pubkey script.

References:

  1. https://peercoin.net/assets/papepeercoin-paper.pdf
  2. https://blackcoin.co/blackcoin-pos-protocol-v2-whitepaper.pdf
  3. https://www.reddcoin.com/papers/PoSV.pdf
  4. https://blog.ethereum.org/2015/08/01/introducing-casper-friendly-ghost/
  5. https://github.com/JohnDolittle/blackcoin-old/blob/mastesrc/kernel.h#L11
  6. https://github.com/JohnDolittle/blackcoin-old/blob/mastesrc/main.cpp#L2032
  7. https://github.com/JohnDolittle/blackcoin-old/blob/mastesrc/main.h#L279
  8. http://earlz.net/view/2017/07/27/1820/what-is-a-utxo-and-how-does-it
  9. https://en.bitcoin.it/wiki/Script#Obsolete_pay-to-pubkey_transaction
  10. https://en.bitcoin.it/wiki/Script#Standard_Transaction_to_Bitcoin_address_.28pay-to-pubkey-hash.29
  11. https://en.bitcoin.it/wiki/Script#Provably_Unspendable.2FPrunable_Outputs
Article by earlz.net
http://earlz.net/view/2017/07/27/1904/the-missing-explanation-of-proof-of-stake-version
submitted by B3TeC to Moin [link] [comments]

Qtum - Quantum Chain Design Document

Serialization: Qtum Foundation Design Document

Foreword
In this series of articles, the Qtum Quantum Chain Foundation will make its early design documents public for the first time, hoping to help the community understand the design intent of Qtum and the implementation details of its key technologies. The articles are based on the original design drafts in order to preserve the designers' original ideas. The Qtum project team will follow up with further collation and interpretation to help readers understand more technical details, so stay tuned.
The topics that may be included in this series include
* Qtum account abstraction layer AAL
* Qtum distributed autonomous protocol DGP
* Qtum wallet (qt, mobile wallet, etc.) and browser
* Add RPC call
* Mutual interest consensus mechanism MPoS
* Add opcode
* Integration of EVM and Qtum blockchain
* Qtum x86 virtual machine
* Others...
The Qtum quantum chain official account will post updates on the above topics from time to time, retracing the history of the Qtum project and its key technologies from the beginning.
Qtum original design document summary -- Qtum new OPCODE
As we all know, Qtum uses the same UTXO model as Bitcoin. The original UTXO script was not compatible with the EVM account model, so Qtum added three opcodes (OP_CREATE, OP_CALL, and OP_SPEND) to UTXO transaction scripts in order to support conversions between the UTXO and EVM account models. The original names of the three opcodes were OP_EXEC (OP_CREATE), OP_EXEC_ASSIGN (OP_CALL), and OP_TXHASH (OP_SPEND), respectively.
The following is an excerpt of representative original documents for interested readers.
OP_CREATE (or OP_EXEC**)**
OP_CREATE (or OP_EXEC) is used to create a smart contract. The original design files (with Chinese translation) related to this opcode from the Qtum development team are as follows (note: the QTUM<#> and QTUMCORE<#> numbers refer to internal design documents):
QTUMCORE-3: Add EVM and OP_CREATE for contract execution
Description: After this story, the EVM should be integrated and a very basic contract should be capable of being executed. There will be a new opcode, OP_CREATE (formerly OP_EXEC), which takes 4 arguments, in push order:
  1. VM version (currently 1 is EVM)
  2. Gas price (not yet used, anything is valid)
  3. Gas limit (not yet used, assume very high limit)
  4. bytecode
For now it is OK that this script format be forced and mandatory for OP_CREATE transactions on the blockchain (ie, only "standard" allowed on the blockchain). When OP_CREATE is encountered, it should execute the EVM and persist the contract to a database (triedb). Note: Make sure to follow policy for external code (commit vanilla unmodified code first, and then change it as needed). Make the EVM test suite functional as well (someone else can setup continuous integration changes for it though).
The above document describes the functions required by OP_CREATE and the parameters used.
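To make the push order concrete, here is a hedged sketch of assembling such an OP_CREATE output script; the opcode byte value, the push encoding and the script-number encoding are simplified placeholders rather than Qtum's actual serialization code:

#include <cstdint>
#include <vector>

using Script = std::vector<uint8_t>;

static const uint8_t OP_CREATE_PLACEHOLDER = 0xc1;  // assumed value, illustration only
static const uint8_t OP_PUSHDATA2 = 0x4d;

// Append a data push. Simplified: direct push for short payloads, OP_PUSHDATA2
// otherwise; the full minimal-encoding rules are ignored.
static void PushData(Script& script, const std::vector<uint8_t>& data) {
    if (data.size() <= 75) {
        script.push_back(static_cast<uint8_t>(data.size()));
    } else {
        script.push_back(OP_PUSHDATA2);
        script.push_back(static_cast<uint8_t>(data.size() & 0xff));
        script.push_back(static_cast<uint8_t>((data.size() >> 8) & 0xff));
    }
    script.insert(script.end(), data.begin(), data.end());
}

// Little-endian number encoding. Simplified: real script numbers also carry a
// sign bit and encode zero as an empty push.
static std::vector<uint8_t> EncodeNumber(uint64_t n) {
    std::vector<uint8_t> out;
    do { out.push_back(static_cast<uint8_t>(n & 0xff)); n >>= 8; } while (n != 0);
    return out;
}

// Build the vout script: the four arguments in the push order listed above,
// followed by the (hypothetical) OP_CREATE byte.
Script BuildCreateScript(uint64_t vmVersion, uint64_t gasPrice, uint64_t gasLimit,
                         const std::vector<uint8_t>& bytecode) {
    Script script;
    PushData(script, EncodeNumber(vmVersion));  // 1. VM version
    PushData(script, EncodeNumber(gasPrice));   // 2. gas price
    PushData(script, EncodeNumber(gasLimit));   // 3. gas limit
    PushData(script, bytecode);                 // 4. contract bytecode
    script.push_back(OP_CREATE_PLACEHOLDER);
    return script;
}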

OP_CALL (or OP_EXEC_ASSIGN)

OP_CALL is used for contract execution and is one of the most commonly used opcodes. There are many descriptions in the original design document.
QTUM6: Implement calling environment info in EVM for OP_EXEC_ASSIGN 
Description: Solidity expects certain information to be pushed onto the stack as part of its ABI. So, when data is sent into the contract using OP_EXEC_ASSIGN we need to make sure to provide this data. This data includes the Solidity "function selector", as well as ensuring the opcodes CALLER and ORIGIN function properly. This looks to be fairly easy; it should just be transferring some data from the Bitcoin stack to the EVM stack, and setting some fields for the origin info. However, this story should be split into multiple tasks and re-evaluated if it isn't easy. See also: https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI For populating the CALLER and ORIGIN values, the following should be done: OP_EXEC_ASSIGN should take 2 extra arguments, SENDER and SENDER_SIGNATURE. Sender should be a public key. Sender Signature is the signature of all the vins for the current transaction, signed of course using the SENDER value. On the EVM side, CALLER's value will be a public key hash, ie, a hash of the SENDER public key. This public key hash should be compatible with Bitcoin's public key hash for its standard version 1 addresses. If the given SENDER_SIGNATURE does not match successfully, then the transaction should be considered invalid. If the SENDER public key is 0, then SENDER_SIGNATURE must also be 0, and the given CALLER opcode etc should just return 0.
The above document describes the OP_EXEC_ASSIGN calling environment information that needs to be implemented in the EVM.
QTUM8: Implement OP_EXEC_ASSIGN for sending money to contracts 
Description: A new opcode should be added, OP_EXEC_ASSIGN. This opcode should take these arguments in push order:
  1. version number (VM version to use, currently just 1)
  2. gas price (can be ignored for now)
  3. gas refund script (can be ignored for now)
  4. data (The data to hand to the smart contract. This will include things like the Solidity ABI Function Selector and other data that will later be available using the CALLERDATA EVM opcode)
  5. smart contract address (txid + vout number)
It should return two values right now, 0 and 0. These are for spendable and out of gas, respectively. Making them spendable and dealing with out of gas will be in a future story. For this story, the EVM contract does not actually need to be executed. This opcode should only be a placeholder so that the accounting system can determine how much money a contract has control of.
The above document describes the OP_EXEC_ASSIGN implementation details.
QTUM15: Execute the relevant contract during OP_EXEC_ASSIGN 
Description: After this story is complete, when OP_EXEC_ASSIGN is reached, it should actually execute the contract whose address was given to it, passing the relevant data from the bitcoin script stack with it. Other data such as the caller and sender can be left for a later story. Making the CALLER, ORIGIN etc opcodes work properly will be fixed with a later story
The above document describes how an OP_EXEC_ASSIGN script runs the relevant contract code.
QTUM40: Allow contracts to send money to pubkeyhash addresses Description: We need to allow contracts to send money back to pubkeyhash addresses, so that people can withdraw their coins from contracts when allowed, etc. This should work similarly to how version 0 contract sends work. Instead of using an OP_EXEC_ASSIGN vout, though, we need to instead use a standard pubkeyhash script. So, upon spending to a pubkeyhash, the following transaction should be placed on the blockchain:
vin: [standard contract OP_EXEC_ASSIGN inputs] ...
vout:
  OP_DUP OP_HASH160 [pubKeyHash] OP_EQUALVERIFY OP_CHECKSIG
  change output - version 0 OP_EXEC_ASSIGN back to spending contract
These outputs should be directly spendable in the wallet with no changes to the wallet code itself.
The above document describes how to allow contracts to send QTUM to pubkeyhash addresses.
QTUMCORE-10: Add ability for contracts to call other deployed contracts
Description: Contracts should be capable of calling other contracts using a new opcode, OP_CALL. Arguments in push order:
  1. version (32 bit integer)
  2. gas price (64 bit integer)
  3. gas limit (64 bit integer)
  4. contract address (160 bits)
  5. data (any length)
OP_CALL should always return false for now. OP_CALL only results in contract execution when used in a vout; similar to OP_CREATE, it uses the special rule to process the script during vout processing (rather than when spent, as is normal in Bitcoin). Contract execution should only be triggered when the transaction script is in this standard format and has no extra opcodes. If an OP_CALL is created that uses an invalid contract address, then no contract execution should take place. The transaction should still be valid in the blockchain, however. If money was sent with OP_CALL, then that money (minus the gas fees) should result in a refund transaction to send the funds back to vin[0]'s vout script. The "sender" exposed to the EVM should be the pubkeyhash spent by vin[0]. If the vout spent by vin[0] is not a pubkeyhash, then the sender should be 0. Funds can be sent to the contract using an OP_CALL vout. These funds will be handled by the account abstraction layer in a different story, to expose this to the EVM. Multiple OP_CALLs can be used in a single transaction. However, before contract execution, the gas price and gas limit of each OP_CALL vout should be checked to ensure that the transaction provides enough transaction fees to cover the gas. Additionally, this should be verified even when the contract is not executed, such as when it is accepted into the mempool.
The above document describes how the contract calls other contracts via OP_CALL.

OP_SPEND (or OP_TXHASH, OP_EXEC_SPEND)

OP_SPEND is used for spending from a contract's balance. Because a contract address is a special kind of address, its UTXOs need special handling in order to maintain consensus. Therefore, the original design documents have comparatively more to say about the OP_SPEND opcode.
QTUM20: Create OP_EXEC_SPEND transaction when a contract spends money 
Description: When a CALL opcode or similar is used from an EVM contract to send another contract money, this should be shown on the blockchain as a new transaction. When a money transfer is done in the contract, the miner should add a new transaction exactly after the currently processing transaction in the block. This transaction should spend an input owned by the contract by using EXEC_SPEND in its redeemScript. For the purposes of this story, assume change is not something to be worried about and consume as many inputs as needed. Properly picking efficient coins and sending money back to the originating contract will come in a later story. Edge cases to watch for: The transaction for sending money to the contract must come directly after the executing transaction. The outputs should use a version-0 OP_EXEC_ASSIGN vout, so that if the transaction were received out of context, it would still mean to not execute the contract.
The above document describes the timing of creating an OP_SPEND transaction.
QTUM21: Create consensus-critical change and coin-picking algorithm for OP_EXEC_SPEND transactions Description: Building on #20, now a consensus-critical algorithm must be made that picks the most optimal outputs belonging to the contract, and spends them, and also makes a change output that returns the "change" from the transaction back to the contract. All outputs in this case should be using a version-0 OP_EXEC_ASSIGN, to avoid running into the limitation that prevents more than one (version 1) OP_EXEC_ASSIGN transaction from being in a single transaction. The transaction should have as many vins as needed, and exactly 2 vouts. The first vout to go to the target contract, and the second vout to send change back to the source contract. 
QTUM22: Disallow more than one EVM execution per transaction
Description: In order to avoid significant edge cases, for now, disallow more than one EVM execution from taking place in a single transaction. This includes both deployment and fund assignment vouts. Instead, such things should be split into multiple transactions. If two EVM executions are encountered, the transaction should be treated as completely invalid and not suitable for broadcast nor for putting into a block.
QTUM23: Add "version 0" OP_EXEC_ASSIGN, which does not execute EVM Description: To counteract problems from #22, we should allow OP_EXEC_ASSIGN to be used to fund a contract without the contract actually being executed. This will be used later for "change" outputs to (multiple) contracts. If the version number passed in for OP_EXEC_ASSIGN is 0, then the contract is not executed. Also, this is only valid if the data provided to OP_EXEC_ASSIGN is just a single byte "0". Multiple version-0 OP_EXEC_ASSIGN vouts should be valid in a transaction, or 1 non-version-0 OP_EXEC_ASSIGN (or an OP_EXEC deployment) and multiple version-0 OP_EXEC_ASSIGN vouts. This will be used for all money spending that is sent from a contract to another contract
The above three documents describe how a consensus-critical coin-picking and change algorithm ensures that the OP_SPEND opcode does not cause consensus errors and that change is returned to the contract correctly. They also describe the cases in which a contract does not need to be run, and how those are handled.
QTUM34: Disallow OP_EXEC and OP_EXEC_ASSIGN from coinbase transactions Description: Because of problems with coinbase maturity and potential side effects from ordering of gas-refund scripts, it should not be legal for coinbase outputs to be anything which results in EVM execution or directly changing EVM account balances. This includes version 0 OP_EXEC_ASSIGN outputs. 
The above document stipulates that coinbase transactions should not include contract-related scripts.

Other related documents

In addition, there are some documents describing the infrastructure needed for the new opcodes.
QTUMCORE-51:Formalize the version field for OP_CREATE and OP_CALL Description:In order to sustain future extensions to the protocol, we need to set some rules for how we will later upgrade and add new VMs by changing the "version" argument to OP_CREATE and OP_CALL. We need a definitive VM version format beyond our current "just increment when doing upgrades". This would allow us to more easily plan upgrades and soft-forks. Proposed fields: 
  1. VM Format (can be increased from 0 to extend this format in the future): 2 bits
  2. Root VM - The actual VM to use, such as EVM, Lua, JVM, etc: 6 bits
  3. VM Version - The version of the Root VM to use (for upgrading the root VM with backwards compatibility): 8 bits
  4. Flag options - For flags to the VM execution and AAL: 16 bits
Total: 32 bits (4 bytes). Size is important since it will be in every EXEC transaction.
Flag option bits that control contract creation (only apply to OP_CREATE):
• 0 (reserve) Fixed gas schedule - if true, then this contract chooses to opt out of allowing different gas schedules. Using OP_CALL with a gas schedule other than the one specified in its creation will result in an immediate exception and an out of gas refund condition
• 1 (reserve) Enable contract admin interface (reserve only, this will be implemented later. Will allow contracts to control for themselves what VM versions and such they allow, and allows the values to be changed over the lifecycle of the contract)
• 2 (reserve) Disallow version 0 funding - If true, this contract is not capable of receiving money through version 0 OP_CALL, other than as required for the account abstraction layer.
• bits 3-15 available for future extensions
Flag options that control contract calls (only apply to OP_CALL): (none yet)
Flag options that control both contract calls and creation: (none yet)
These flags will be implemented in a later story. Note that the version field now MUST be a 4 byte push. A standard EVM contract would now use the version number (in hex) "01 00 00 00".
Consensus behavior:
• VM Format must be 0 to be valid in a block
• Root VM can be any value. 1 is EVM, 0 is no-exec. All other values result in no-exec (allowed, but no execution, for easier soft-forks later)
• VM Version can be any value (soft-fork compatibility). If a version other than 0 is used (0 is the initial release version), then it will execute using version 0 and ignore the value
• Flag options can be any value (soft-fork compatibility); inactive flag fields are ignored
Standard mempool behavior:
• VM Format must be 0
• Root VM must be 0 or 1
• VM Version must be 0
• Flag options - all valid fields can be set. All fields that are not assigned must be set to 0
Defaults for EVM: VM Format: 0, Root VM: 1, VM Version: 0, Flags: 0
The above document formalizes the version information required by OP_CREATE and OP_CALL, paving the way for Qtum's later support of multiple virtual machines.
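As an aside (not part of the original document), the following minimal Python sketch shows one way the proposed 4-byte version field could be packed and parsed. The field widths and the EVM defaults come from the document above; the exact bit ordering inside the first byte is an assumption chosen so that the EVM defaults serialize to the "01 00 00 00" example, and the function names are hypothetical.

# Illustrative sketch only: pack/parse the proposed 4-byte version field
# (VM Format: 2 bits, Root VM: 6 bits, VM Version: 8 bits, Flags: 16 bits).
# The bit ordering inside the first byte is assumed so that the EVM defaults
# (format=0, root_vm=1, vm_version=0, flags=0) serialize to "01 00 00 00";
# Qtum's real implementation may lay the bits out differently.

def pack_version(vm_format: int, root_vm: int, vm_version: int, flags: int) -> bytes:
    assert 0 <= vm_format < 4 and 0 <= root_vm < 64
    assert 0 <= vm_version < 256 and 0 <= flags < 65536
    first = (vm_format << 6) | root_vm          # 2-bit format, 6-bit root VM
    return bytes([first, vm_version]) + flags.to_bytes(2, "little")

def parse_version(raw: bytes) -> dict:
    assert len(raw) == 4, "version field must be a 4-byte push"
    return {
        "vm_format": raw[0] >> 6,
        "root_vm": raw[0] & 0x3F,
        "vm_version": raw[1],
        "flags": int.from_bytes(raw[2:4], "little"),
    }

# EVM defaults from the document: format 0, root VM 1 (EVM), version 0, flags 0.
assert pack_version(0, 1, 0, 0).hex() == "01000000"
print(parse_version(bytes.fromhex("01000000")))

Under the consensus rules described above, a node would then reject blocks where the VM Format is non-zero and treat any Root VM value other than 1 as no-exec.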
QTUMCORE-52: Contract Admin Interface Description: (Note: this isn't a goal for mainnet, though it would be a nice feature to include.) It should be possible to manage the lifecycle of a contract internally, within the contract itself. Variables and configuration values that might need to be changed over the course of a contract's lifecycle include:
• Allowable gas schedules
• Allowable VM versions (i.e., if a future VM version breaks this contract, don't allow it to be used, and likewise deprecate past VM versions from being used to interact with this contract)
• Creation flags (the version flags in OP_CREATE)
All of these variables must be controllable from within the contract itself, using decentralized code. For instance, in a DAO scenario, participants might vote on them within the contract, and the contract then triggers the code that changes these parameters. In addition, a contract should be able to inspect its own settings throughout its execution as well as when it is initially created. I propose implementing this interface as a special pre-compiled contract. To interact with it, a contract would call it using the Solidity ABI like any other contract. Proposed ABI for the contract:
• bytes[2048] GasSchedule(int n)
• int GasScheduleCount()
• int AddGasSchedule(bytes[2048])
• bytes[32] AllowedVMVersions()
• void SetAllowedVMVersions(bytes[32])
Alternative implementations: There could be a specific Solidity function which is called in order to validate that the contract should allow itself to be called in a particular manner:

pragma solidity 0.4.0;

contract BlockHashTest {
    function BlockHashTest() { }

    function ValidateGasSchedule(bytes32 addr) public returns (bool) {
        if (addr == "123454") {
            return true;  // allow contract to run
        }
        return false;     // do not allow contract to run
    }

    function ValidateVMVersion(byte version) public returns (bool) {
        if (version >= 2 && version < 10) {
            return true;  // allow versions 2-9; say version 1 had a vulnerability and version 10 broke the contract
        }
        return false;
    }
}

In this way a contract is responsible for managing its own state. The basic way it would work is that when you use OP_CALL to call a contract, it would first execute these two functions (and their execution would be included in gas costs). If either function returns false, it immediately triggers an out-of-gas condition and cancels execution. It is slightly complicated to manage the "ValidateVMVersion" callback, however, because we must decide which VM version to use to run it; a bad choice could cause this function itself to misbehave.
The above document describes the contract admin interface, through which a contract can manage its own state.
QTUMCORE-53: Add opt-out flags to contracts for version 0 sends Description: Some contracts may wish to opt out of certain Qtum features that are not present in Ethereum, so that more Ethereum contracts can be ported to Qtum without worrying about new features of the Qtum blockchain. Two flag options should be added to the version field, which only have an effect in OP_CREATE when creating the contract:
  2. (1st bit) Disallow "version 0" OP_CALLs to this contract outside of the AAL (DisallowVersion0). If this is enabled, then an OP_CALL using "root VM 0" (which causes no execution) is not allowed to be sent to this contract. If money is sent to this contract using a "version 0" OP_CALL, it will result in an out-of-gas exception and the funds should be refunded. Version 0 payments made internally within the Account Abstraction Layer are not affected by this flag.
Along with these new consensus rules, there should also be some standard mempool checks:
  1. If an OP_CALL transaction uses a different gas schedule than the one set at contract creation, and the disallow-dynamic-gas flag is set, then the transaction should be rejected from the mempool as non-standard (version 0 payments should not be allowed as standard transactions in the mempool anyway).
The above document describes how a contract can opt out of certain Qtum-specific features to achieve better EVM compatibility, making it easier to port smart contracts from the Ethereum ecosystem to Qtum.
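As a purely illustrative sketch (not from the documents), the Python snippet below shows how a node might apply the DisallowVersion0 rule, reusing the assumed field layout from the earlier version-field example. The documents are not entirely consistent about which flag bit carries this option (QTUMCORE-51 lists "Disallow version 0 funding" at bit 2, while QTUMCORE-53 calls it the "1st bit"); bit 2 is assumed here, and the function name is hypothetical.

# Hypothetical illustration of the DisallowVersion0 rule described above.
# Assumes flags occupy the low 16 bits of the 4-byte version field
# (little-endian) and that "Disallow version 0 funding" is bit 2.

DISALLOW_VERSION0_BIT = 1 << 2

def call_allowed(creation_version: bytes, call_version: bytes, via_aal: bool) -> bool:
    """Return False when the call must fail with an out-of-gas refund."""
    creation_flags = int.from_bytes(creation_version[2:4], "little")  # assumed flag location
    call_root_vm = call_version[0] & 0x3F    # root VM field of the incoming OP_CALL (assumed layout)
    is_version0_call = (call_root_vm == 0)   # root VM 0 means "no execution"

    if is_version0_call and not via_aal and (creation_flags & DISALLOW_VERSION0_BIT):
        return False  # contract opted out of plain version 0 funding
    return True

# A contract created with the flag set refuses a plain version 0 OP_CALL,
# but version 0 payments made internally by the AAL still go through.
created_version = bytes([0x01, 0x00]) + DISALLOW_VERSION0_BIT.to_bytes(2, "little")
print(call_allowed(created_version, bytes.fromhex("00000000"), via_aal=False))  # False
print(call_allowed(created_version, bytes.fromhex("00000000"), via_aal=True))   # True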

Summary

The original Qtum design documents presented above describe the opcodes Qtum added for contract execution, laying the groundwork for running the account-model EVM on top of Qtum's UTXO model and making the account abstraction layer (AAL) possible. The Qtum project team will interpret further key documents in subsequent posts. If you have any questions, feel free to post them in the comments or contact the Qtum project team.
The official Qtum (Quantum Chain) WeChat account will continue to publish posts on these topics from time to time, retracing the history of the Qtum project and its key technologies from the beginning.
Please note that, at Patrick Dai's request, this material has been translated into English and published on Reddit.
OP's Qtum Address: QMmYAMEFgvPJGwK9nrwqYw1DHhBkiuEi78
submitted by szhman to Qtum [link] [comments]

Sergio Demian Lerner wants to add Turing-completeness to Bitcoin. This is a bad idea. Satoshi deliberately *omitted* Turing-completeness from Bitcoin - because so much expressiveness could be *dangerous*.

https://twitter.com/sdlernestatus/699714323619450881
https://np.reddit.com/Bitcoin/comments/464ycn/sergio_lerner_i_think_i_will_start_working_on_a/
Satoshi deliberately omitted Turing-completeness from Bitcoin - because so much expressiveness could be dangerous.
So, why does Sergio Demian Lerner want to add Turing-completeness to Bitcoin?
Is he not aware that most people consider Turing-completeness to be dangerous for Bitcoin?
If Sergio Demian Lerner wants to play around with adding Turing-completeness, he should do this with an alt-coin. Bitcoin is not Turing-complete for a reason: to guarantee Bitcoin's safety.
References:
https://bitcoin.stackexchange.com/questions/17258/turing-completeness-of-bitcoin-script
If scripts were Turing-complete, you could construct a fairly short script that took an extremely long time to run (a la the Busy Beaver) or contained an infinite loop. This would tend to result in a denial of service against everyone on the network, when they tried to verify the transaction.
– Nate Eldredge
https://bitcoin.stackexchange.com/questions/25427/the-bitcoin-scripting-system-is-purposefully-not-turing-complete-why
Can somebody explain to me why the Bitcoin scripting system is purposefully not Turing-complete? To make malicious programs difficult to develop (I guess)? Or because it was difficult to make it Turing-complete?
Bitcoin uses a scripting system for transactions. Forth-like, Script is simple, stack-based, and processed from left to right. It is purposefully not Turing-complete, with no loops.
Retrieved from: https://en.bitcoin.it/wiki/Script
– Murch
As others have said, there is no real need for Bitcoin scripting to be more complex than it is, as its complexity is more than enough for its intended applications; but the main reason is that not allowing some features (such as loops) in a language makes it completely deterministic: you can know for sure when and how a given program will end; you can't f.e. have infinite loops if you don't have loops in the first place, thus you don't have to worry about programs getting stuck and blocking/crashing the interpreter which is running them (in this case, the main Bitcoin software).
Not having to deal with the halting problem is definitely a plus for a tiny, embedded, purpose-specific language such as the one used for Bitcoin scripts.
– Massimo
It's easier to meter and restrict if it's not Turing complete, remembering that every node in the network needs to execute every script to ensure validity, we want it to be lightweight. It's not like it needs to be any more complex, nobody uses what we have to do anything interesting. Most of the opcodes are completely disabled and there's been no requests for them to be re-enabled.
There's so little use of script that I have manually inspected every single instance of a non-standard transaction to see what they do. Other than the hash collision competitions and a lot of broken p2pool outputs, nobody to date has done anything even approaching interesting.
In other words, it's not complex because it doesn't need to be.
– goatse
https://bitcointalk.org/index.php?topic=431513.0
Satoshi probably left it out [Turing-completeness] because making a Turing-complete transaction scripting language safe is more difficult. You have to prevent scripts from running endlessly while also allowing them to run long enough to be useful, and you can't let them access too much external data or they might become invalid after being valid for a while and really screw things up.
– theymos
A full turing-complete scripting system seems like a pretty dangerous idea to me
– bitfreak!
Yes, there are many reasons this [Turing-completeness] does not exist in bitcoin.
say you want to run arbitrary source code on a p2p node to make possible "smart contracts". how do you know the source is not going to root your operating system? to understand that you have to know how easy and quick introducing backdoors is, in terms of computational complexity. in most cases you have program flow, create some kind of jump, and emulate further normal program flow. usually you have to be quite clever, as Operating System/application developers battle hackers all the time and there is a long list of vulnerabilities. it takes only one bug to introduce a hole. vulnerabilities can be a combination of software and configurations.
writing source code which can predict whether other source code does what it is supposed to do is largely impossible
– coinrevo
submitted by ydtm to btc [link] [comments]

In addition to changing the semantics of a number of opcodes, there are also some changes to Script's resource limits: the maximum script size of 10,000 bytes does not apply (script size is only implicitly bounded by the block weight limit), and the maximum of 201 non-push opcodes per script does not apply either.
Script is built from words, also known as opcodes, commands, or functions. Some words existed in very early versions of Bitcoin but were removed out of concern that the client might have a bug in their implementation. OP_NOP1 through OP_NOP10 were originally set aside to be used when HASH and other security functions become insecure due to improvements in computing. A full list of opcodes can be found on the Bitcoin Wiki.
Bitcoin operates on a fixed ruleset. These so-called consensus rules include the operation of the opcodes in Bitcoin Script, the rate at which new bitcoins are issued, the mathematical function used to calculate the target for the difficulty algorithm, and more. The protocol is agreed upon by the miners who control network operation.
As for Turing-complete programming with the Antara Framework: support for blockchain programming has advanced significantly since the initial launch of Bitcoin Core in 2009, and even Script has had numerous improvements.
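To make the stack-based, loop-free execution model concrete, here is a small illustrative Python sketch (not from any of the sources above) of how a tiny subset of Script-like opcodes could be evaluated left to right on a stack. It is a toy model, not Bitcoin's actual interpreter, and the opcode subset is chosen only for illustration.

# Toy evaluator for a tiny, loop-free, Script-like stack language.
# Illustration only: real Bitcoin Script has many more opcodes, strict
# encoding rules, and signature checking, none of which are modeled here.

def evaluate(script):
    stack = []
    for op in script:                      # processed left to right; no loops or jumps
        if isinstance(op, int):            # a push of a number
            stack.append(op)
        elif op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_DUP":
            stack.append(stack[-1])
        elif op == "OP_EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a == b else 0)
        elif op == "OP_VERIFY":
            if stack.pop() == 0:
                return False               # script fails immediately
        else:
            raise ValueError(f"unknown opcode {op}")
    # A script succeeds if the top of the stack is truthy at the end.
    return bool(stack) and stack[-1] != 0

# "2 3 OP_ADD 5 OP_EQUAL" leaves 1 (true) on the stack, so it succeeds.
print(evaluate([2, 3, "OP_ADD", 5, "OP_EQUAL"]))   # True
print(evaluate([2, 3, "OP_ADD", 6, "OP_EQUAL"]))   # False

Because there are no loop or jump opcodes, execution time is bounded by the length of the script, which is exactly the determinism property emphasized in the discussion above.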
