- It seems that those in the legal profession view smart contracts and algorithms as similar to legal contracts. It's probably not entirely apples to apples, but contracts in the legal sense (to my understanding) specify expected behavior, and that is seen as analogous to the way code enacts behavior.
- DAOs, Democracy and Governance, by Ralph Merkle http://merkle.com/papers/DAOdemocracyDraft.pdf
- In addition to purely decentralized architectures, there are some architectures that leverage desirable properties of different systems. The Blockstack storage layer, for instance, stores a mapping of content hashes to URLs, the content of which can reside in cloud storage platforms. Storing data in a highly available and resilient way (without cryptoeconomics) is a strength of established cloud providers, while the parts that require or benefit from decentralization / “immutability” are things like auditable proofs that event X occurred or that state Y held. [10/2/2018: I randomly saw this video from “Chainkit” in an email today https://www.youtube.com/watch?v=bG5FBbSe4s8 ← it looks to be a cool way to integrate blockchain (and its properties) into existing applications… disclaimer: I only watched the youtube video]
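  The hybrid idea above can be sketched in a few lines: the decentralized layer records only a content hash and a pointer, the cloud provider holds the bytes, and anyone can audit that the two agree. This is a minimal illustrative sketch, not the actual Blockstack API; the class and method names are invented for illustration.

  ```python
  import hashlib

  class HashIndex:
      """Hypothetical decentralized index: content hash -> cloud URL.
      The bytes themselves live on a conventional cloud provider."""
      def __init__(self):
          self._index = {}

      def publish(self, content: bytes, url: str) -> str:
          digest = hashlib.sha256(content).hexdigest()
          self._index[digest] = url
          return digest

      def verify(self, digest: str, fetched: bytes) -> bool:
          # Anyone can audit that the cloud-hosted bytes still match
          # the hash recorded in the decentralized index.
          return (digest in self._index
                  and hashlib.sha256(fetched).hexdigest() == digest)

  index = HashIndex()
  doc = b"some application data"
  digest = index.publish(doc, "https://bucket.example.com/doc")
  assert index.verify(digest, doc)          # untampered fetch checks out
  assert not index.verify(digest, b"evil")  # tampered bytes are detected
  ```

  The availability/durability story stays with the cloud provider; the decentralized layer only has to keep the small, auditable mapping.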
- Storage in IPFS seems something like blob storage, while storage in Ethereum seems more like the Smalltalk image (snapshots of VM state) than a database
- Application logic being public but transactions private is an inversion of today’s applications and services, from the vantage point of the provider. Facebook or Gmail application logic is not viewable by the public, but the transactions (sending emails, creating posts and events, uploading photos) are viewable by their organization. This may have implications for transparency, where people would like to know how things operate under the hood, so to speak, while maintaining privacy for individual transactions.
- I sometimes hear or see blockchains referred to as databases, although I think it’s important to note that the term “database” nowadays refers to a wide range of systems for data storage. NoSQL databases and datastores evolved in environments with different access patterns (optimizing for read/write throughput: a firehose of writes followed by reads for batch processing) than RDBMSs (transaction processing, where MVCC and data integrity are paramount to supporting the application built on top of the database). As a datastore, a permissionless distributed ledger has properties different still from either. < should expand upon >
- Some services already have decentralized incarnations. For instance, multicasting/broadcasting data (an update file for a piece of software, filesharing via BitTorrent, etc.) exists side-by-side today with file hosting on S3/Dropbox/etc. In fact, the IPFS paper cites BitTorrent, Git, and other systems as inspirations for the design (https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf)
- Something like IPFS could be comparable to current day cloud storage solutions. On the other hand, something like large scale computation seems to me like a less general use case. With something like Golem, or Zennet: who are the target users that require large scale computational capacity? For specialist customers who regularly need it (such as data scientists), what would be reasons for them to use a decentralized solution over, say, a cloud service such as EMR? When there is a cloud analogue to a decentralized service, we can examine how their respective properties come about. For instance, information security and QoS in S3 (at a high level) come from the data residing in data centers with backup power supplies, connected together with dedicated lines, and probably replicated across data centers. The same properties in IPFS are achieved using cryptography, replication, etc. < to expand upon >. Are these equivalent for various classes of use cases? Where is the overlap and non-overlap?
A centralized, global, fault tolerant, highly available storage system such as S3 also comprises many ‘administrative’ processes: redistribution of data to match changes in network topology or customer demand, for instance, which in a decentralized system would have to be achieved by an incentive system.
Whether these cryptoeconomic systems, and the interactions between hierarchies and networks of such incentive systems, can achieve the same service level as the processes in a centralized system is likely crucial to whether a decentralized system can operate in a sustainable, steady-state manner.
Conversely, this could imply that the niches in which decentralized systems have the most leverage would be ones in which such equilibria can be achieved and maintained.
- With offchain compute, such as TrueBit in the Ethereum ecosystem... although the blockchain could theoretically be used for a lot of things, it looks to be specialized for storing state that needs to be synchronized among nodes. Productization (which allows generalists access to specialist capabilities) seems to be moving up the stack: from infrastructure, to serverless, to cloud providers even offering AI/ML solutions.
- Can the analogy between “web 3.0” and the early internet be taken too far? With scalability, for instance, current blockchains look like they lie on a spectrum of design tradeoffs between security and throughput. So when future improvements in the technology are referred to, it may not be apples to apples with, say, how the technology to scale web applications and services has been developed <more research needed… besides Ethereum scaling initiatives, also reference blockchains designed for high scalability, such as Kadena and Dfinity>. To address current technical limitations of blockchain, I have heard something to the effect of “we expect that technology improvements will go hand-in-hand”… sometimes this is true, sometimes not. For example, alongside Moore’s law there is a technical roadmap called the International Technology Roadmap for Semiconductors, which attempts to predict advances in the technology with a breakdown of subareas. In the mid-2000s, one of the areas that was predicted to make chips faster was improvements in interconnect conductivity; at the time copper was the lowest-resistance material available for interconnects, and to keep pace with Moore’s law, it was thought that a more conductive material might be found. Such a material was not found, although improvements in other subareas of semiconductor technology allowed Moore’s law to march forward.
- State channels (as a general concept) look like a good way to blockchain-enable applications and services. The vendor or application developer can operate the state channel as a delineation or bulkhead between their system/network/service and the public chain.
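  The general pattern can be sketched as follows: two parties exchange mutually signed state updates off-chain, and only the final (highest-nonce) state goes on-chain for settlement. This is a minimal illustrative sketch of the concept, not any particular protocol; the HMAC "signatures" stand in for real public-key signatures, and all names are invented.

  ```python
  import hashlib
  import hmac
  import json

  def sign(key: bytes, state: dict) -> str:
      # Stand-in for a real digital signature over the channel state.
      msg = json.dumps(state, sort_keys=True).encode()
      return hmac.new(key, msg, hashlib.sha256).hexdigest()

  alice_key, bob_key = b"alice-secret", b"bob-secret"

  def update(nonce, balances):
      state = {"nonce": nonce, "balances": balances}
      return {"state": state,
              "sigs": [sign(alice_key, state), sign(bob_key, state)]}

  # Many cheap off-chain updates within the channel...
  updates = [update(1, {"alice": 9, "bob": 1}),
             update(2, {"alice": 7, "bob": 3}),
             update(3, {"alice": 6, "bob": 4})]

  def settle(updates):
      # The on-chain contract would verify both signatures and accept
      # the highest-nonce valid state during a dispute window.
      valid = [u for u in updates
               if u["sigs"] == [sign(alice_key, u["state"]),
                                sign(bob_key, u["state"])]]
      return max(valid, key=lambda u: u["state"]["nonce"])["state"]

  final = settle(updates)
  assert final == {"nonce": 3, "balances": {"alice": 6, "bob": 4}}
  ```

  The bulkhead property comes from the fact that only `settle` touches the public chain; everything inside the channel stays within the vendor's own system.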
- It’ll be cool to see what kinds of runtimes get implemented. Currently, an EVM language such as Solidity is… (deep breath) a language that is compiled into bytecode, which runs on an interpreter that is itself written in a language that is compiled (or interpreted) down to the machine code the CPU executes. Although, just like with scripting languages, it depends on what the language is being used for (and just like how some systems written in Python may just be glue over bindings to C libraries, some blockchain code could be glue between traditional transactional software systems).
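  To make the layering concrete, here is a toy stack-machine interpreter: the "bytecode" is just a list of tuples, and the interpreter itself runs on CPython's own bytecode VM, one more layer down. Purely illustrative; this is not the EVM instruction set.

  ```python
  def run(bytecode):
      """Interpret a toy stack-machine program."""
      stack = []
      for op, arg in bytecode:
          if op == "PUSH":
              stack.append(arg)
          elif op == "ADD":
              b, a = stack.pop(), stack.pop()
              stack.append(a + b)
          elif op == "MUL":
              b, a = stack.pop(), stack.pop()
              stack.append(a * b)
          else:
              raise ValueError(f"unknown opcode {op}")
      return stack.pop()

  # A "compiled" program for (2 + 3) * 4
  program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
             ("PUSH", 4), ("MUL", None)]
  assert run(program) == 20
  ```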
- A neat piece of tooling I heard about at a talk on formal verification is a “QVM,” or quality virtual machine. It is a layer that sits under the VM and inspects running code to see that it adheres to a ruleset (something like asserts executed at runtime), which could be contract-specific or generic to a domain (“Effectively Callback Free” was given as an example property to check for reentrancy bugs, i.e. The DAO attack). This is like the code equivalent of an application-layer firewall; very cool.
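  The idea can be sketched as a wrapper beneath the contract runtime that checks every call against a ruleset before letting it through. This is an illustrative sketch of the concept only; the class names and rule API are invented, not the actual QVM design from the talk.

  ```python
  class ReentrancyRule:
      """Generic rule: reject a call into a contract that is already
      on the call stack, the pattern behind The DAO attack."""
      def __init__(self):
          self.active = set()

      def before_call(self, contract):
          if contract in self.active:
              raise RuntimeError(f"reentrant call into {contract}")
          self.active.add(contract)

      def after_call(self, contract):
          self.active.discard(contract)

  class QVM:
      """Runtime layer that enforces a ruleset around every call."""
      def __init__(self, rules):
          self.rules = rules

      def call(self, contract, fn, *args):
          for r in self.rules:
              r.before_call(contract)
          try:
              return fn(self, *args)
          finally:
              for r in self.rules:
                  r.after_call(contract)

  vm = QVM([ReentrancyRule()])

  def withdraw(vm):
      # A malicious callback tries to re-enter withdraw mid-execution.
      return vm.call("wallet", withdraw)

  def ping(vm):
      return "ok"

  assert vm.call("wallet", ping) == "ok"   # well-behaved call passes
  try:
      vm.call("wallet", withdraw)          # reentrancy is blocked
  except RuntimeError as e:
      assert "reentrant" in str(e)
  ```

  Like an application-layer firewall, the rules are enforced at runtime regardless of whether the contract code itself was written defensively.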