This repository has been archived by the owner on Dec 14, 2020. It is now read-only.

Quick Start v1.3

jcs47 edited this page Oct 16, 2018 · 1 revision

You can quickly launch an HLF v1.3 network composed of 4 ordering nodes, 2 frontends, and 2 peers by following the steps described below. The following instructions - as well as this README in general - assume entry-level knowledge of both Docker and HLF. This setup uses the default 'SampleOrg' organization across all parties.

1. Create a new docker network named bft_network

  • In the case of a local deployment where all principals execute within the same host, create the network with Docker's standard bridge driver using the command docker network create -d bridge bft_network

  • If instead you intend to create a true distributed deployment, the most straightforward way is to use the overlay driver in swarm mode. From the collection of hosts you intend to use for the deployment, pick one to be the swarm manager. Assuming that the IP address of that host is 192.168.1.1, initialize the Docker daemon as a swarm manager as follows:

$ docker swarm init --advertise-addr 192.168.1.1

Swarm initialized: current node (9m5z41qtktd46d5uqs1da50pc) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4kstilp413po8qqfxod33ig1ydxhfv4rwu3zhh7pf28wlt6h3e-88cf7j0aiuv1xzuusl7ipdt0f 192.168.1.1:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

You can now create an overlay network by using the following command also at the manager:

$ docker network create -d overlay --attachable bft_network

Finally, at every host other than the manager, execute the docker swarm join command with the parameters given in the output of docker swarm init. After that, the network should be available and ready for the next steps.
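Before moving on, it can be worth sanity-checking the swarm setup. The sketch below uses only standard Docker CLI commands and should be run on the swarm manager:

```shell
# Confirm the overlay network exists and that all hosts joined the swarm.
check_swarm() {
  docker network ls --filter name=bft_network   # the overlay network should be listed
  docker node ls                                # every host should appear with STATUS "Ready"
}
# check_swarm
```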

2. Download the ordering service's images and create containers

Execute the commands below in the order shown (each one from a different terminal):

$ docker run -i -t --rm --network=bft_network --name=bft.node.0 bftsmart/fabric-orderingnode:amd64-1.3.0 0
$ docker run -i -t --rm --network=bft_network --name=bft.node.1 bftsmart/fabric-orderingnode:amd64-1.3.0 1
$ docker run -i -t --rm --network=bft_network --name=bft.node.2 bftsmart/fabric-orderingnode:amd64-1.3.0 2
$ docker run -i -t --rm --network=bft_network --name=bft.node.3 bftsmart/fabric-orderingnode:amd64-1.3.0 3

Ordering nodes must be started in order, from the one with the lowest ID to the one with the highest. Once all ordering nodes have output -- Ready to process operations, the frontends can also be started:

$ docker run -i -t --rm --network=bft_network --name=bft.frontend.1000 bftsmart/fabric-frontend:amd64-1.3.0 1000
$ docker run -i -t --rm --network=bft_network --name=bft.frontend.2000 bftsmart/fabric-frontend:amd64-1.3.0 2000
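If you prefer a single terminal, the start-up order above can also be scripted. The sketch below runs each ordering node detached instead of interactively and polls its log for the readiness line before launching the next; it assumes the images log the same -- Ready to process operations line when run detached:

```shell
# Launch the four ordering nodes in ID order, waiting for each to become ready.
start_ordering_nodes() {
  for id in 0 1 2 3; do
    docker run -d --rm --network=bft_network --name="bft.node.$id" \
      bftsmart/fabric-orderingnode:amd64-1.3.0 "$id"
    # Poll the node's log until it reports readiness.
    until docker logs "bft.node.$id" 2>&1 | grep -q -- '-- Ready to process operations'; do
      sleep 1
    done
  done
}
# start_ordering_nodes
```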

3. Start the peers

At this juncture, we can use the official peer image provided by the Hyperledger project. Moreover, we assume that the Docker daemon has its UNIX socket available at /var/run/docker.sock, so we will mount a volume in the container at /var/run/ to give it access to the daemon. This is necessary because peers perform chaincode execution by creating their own containers to execute their instantiated chaincodes.

  • If you created the network with the bridge driver, these commands suffice:
$ docker run -i -t --rm --network=bft_network -v /var/run/:/var/run/ --name=bft.peer.0 hyperledger/fabric-peer:amd64-1.3.0
$ docker run -i -t --rm --network=bft_network -v /var/run/:/var/run/ --name=bft.peer.1 hyperledger/fabric-peer:amd64-1.3.0
  • If instead you created a swarm network, we first need to deal with an idiosyncrasy that manifests when using the overlay driver with a peer container. If we used the commands above, the peer would be prone to block or time out when a client eventually tries to instantiate some chaincode. The way we found to avoid this issue is to first connect the peer's container with the bridge driver and then with the overlay driver:
#Start bft.peer.0
$ docker create -i -t --rm --network=bridge -v /var/run/:/var/run/ --name=bft.peer.0 hyperledger/fabric-peer:amd64-1.3.0
$ docker network connect bft_network bft.peer.0
$ docker start -a bft.peer.0

#Start bft.peer.1
$ docker create -i -t --rm --network=bridge -v /var/run/:/var/run/ --name=bft.peer.1 hyperledger/fabric-peer:amd64-1.3.0
$ docker network connect bft_network bft.peer.1
$ docker start -a bft.peer.1
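The three-step workaround above can be wrapped in a small helper so each peer is started with a single call. This is just a convenience sketch around the exact commands shown in this step; run each call in its own terminal:

```shell
# Create the peer on the bridge network, attach it to the overlay network,
# then start it attached to the current terminal.
start_peer() {
  local name="$1"
  docker create -i -t --rm --network=bridge -v /var/run/:/var/run/ \
    --name="$name" hyperledger/fabric-peer:amd64-1.3.0
  docker network connect bft_network "$name"
  docker start -a "$name"
}
# start_peer bft.peer.0   # and likewise: start_peer bft.peer.1
```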

4. Create the clients

We will create 2 clients, each configured to contact a different peer as its endpoint. For this, we will use the image from our own repository:

$ docker run -i -t --rm --network=bft_network --name=bft.cli.0 -e CORE_PEER_ADDRESS=bft.peer.0:7051 bftsmart/fabric-tools:amd64-1.3.0
$ docker run -i -t --rm --network=bft_network --name=bft.cli.1 -e CORE_PEER_ADDRESS=bft.peer.1:7051 bftsmart/fabric-tools:amd64-1.3.0

You can also use the official client image (hyperledger/fabric-tools:amd64-1.3.0), but the one provided by us is already configured for this demonstration. More importantly, you will also need the configtxgen tool provided with this image if you decide to set up a network different from the one configured in our images.

You now have the whole network booted up, using the SampleOrg organization for the clients, the peers, and the ordering service.

5. Create the artifacts

Switch to the terminal where you launched bft.cli.0. You should have access to the container's command line. The rest of the commands should be issued from within it. Generate the transactions to create a new channel named "channel47" and to update its anchor peers as follows:

$ configtxgen -profile SampleSingleMSPChannel -outputCreateChannelTx channel.tx -channelID channel47
$ configtxgen -profile SampleSingleMSPChannel -outputAnchorPeersUpdate anchor.tx -channelID channel47 -asOrg SampleOrg

Notice that we are not generating the genesis block for the system channel, because the images already come with one generated. The name of the system channel is "bftchannel".
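Before submitting the transaction, configtxgen can print it back as JSON, which is a quick way to double-check the channel ID and organization. This uses configtxgen's standard inspection flag, wrapped in a function only for convenience:

```shell
# Print a channel creation transaction as JSON for inspection.
inspect_channel_tx() {
  configtxgen -inspectChannelCreateTx "$1"
}
# inspect_channel_tx channel.tx
```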

6. Create and join a new channel

Send the transactions to the ordering service by contacting the frontend:

$ peer channel create -o bft.frontend.1000:7050 -c channel47 -f channel.tx 
$ peer channel update -o bft.frontend.1000:7050 -c channel47 -f anchor.tx

Bear in mind that you should only supply an entrypoint for frontends, not ordering nodes. Ordering nodes only receive transactions from the frontends and send the assembled blocks back to them.

You should now have a file named "channel47.block" in your current directory of the container. You can use it to make the peer join the channel as follows:

$ peer channel join -b channel47.block

Once the peer receives the blockchain for the channel, you may notice the following output at its terminal:

2018-xx-xx xx:xx:xx.xxx UTC [protoutils] ValidateTransaction -> ERRO 083 validateCommonHeader returns err invalid nonce specified in the header
2018-xx-xx xx:xx:xx.xxx UTC [committer/txvalidator] validateTx -> ERRO 084 Invalid transaction with index 0
2018-xx-xx xx:xx:xx.xxx UTC [valimpl] preprocessProtoBlock -> WARN 087 Channel [channel47]: Block [1] Transaction index [0] TxId [] marked as invalid by committer. Reason code [BAD_COMMON_HEADER]

This happens because the peer tries to validate the signature of configuration envelopes created by the ordering nodes, but no such signature is included: the envelope structure is generated at correct ordering nodes, but since the structure only supports a single signature, none can be included. However, since this verification is done for audit purposes and the blocks themselves are still correctly signed, the blocks are still appended to the chain. Moreover, this scenario only occurs for reconfiguration transactions, because the ordering nodes need to create a new configuration tree and generate a new envelope containing the updated tree; regular transactions do not require the ordering nodes to create their own envelopes, so this error does not occur for them. Finally, if you want to prevent this error from occurring, you can use our own peer image, available at bftsmart/fabric-peer:amd64-1.3.0, which skips signature verification for configuration envelopes.

7. Install and instantiate chaincode

This container includes the official examples shipped with the HLF source code. Install and instantiate the following example chaincode:

$ peer chaincode install -n example02 -v 1.3 -p github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd
$ peer chaincode instantiate -o bft.frontend.1000:7050 -C channel47 -n example02 -v 1.3 -c '{"Args":["init","a","100","b","200"]}'

You may notice that chaincode instantiate takes a few seconds to complete without presenting any output. This is because the peer is downloading and installing the docker images for the chaincode execution environment.
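Once instantiation completes, a couple of optional checks can confirm everything is in place. The sketch below uses the standard peer CLI listing commands and looks for the chaincode's dedicated container, whose name starts with dev- by Fabric convention:

```shell
# List known chaincodes and look for the chaincode execution container.
check_chaincode() {
  peer chaincode list --installed
  peer chaincode list --instantiated -C channel47
  docker ps --filter name=dev-    # chaincode execution container(s)
}
# check_chaincode
```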

8. Issue invocations and queries

You can now perform queries and invocations to the chaincode:

$ peer chaincode query -C channel47 -n example02 -c '{"Args":["query","a"]}'

2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
Query Result: 100

$ peer chaincode invoke -C channel47 -n example02 -c '{"Args":["invoke","a","b","10"]}'

2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] InitCmdFactory -> INFO 001 Get chain(channel47) orderer endpoint: 172.17.0.6:7050
2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default escc
2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default vscc
2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 004 Chaincode invoke successful. result: status:200 
2018-xx-xx xx:xx:xx.xx UTC [main] main -> INFO 005 Exiting.....

$ peer chaincode query -C channel47 -n example02 -c '{"Args":["query","a"]}'

2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
Query Result: 90

Instead of typing all these commands manually, you can simply execute invoke_demo.sh anywhere within bft.cli.0's container.
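The invoke/query cycle above can also be repeated in a loop, using only the commands from this step; the repetition count and amounts are illustrative:

```shell
# Transfer 10 from "a" to "b" three times, then query "a"'s balance.
run_transfers() {
  for i in 1 2 3; do
    peer chaincode invoke -C channel47 -n example02 -c '{"Args":["invoke","a","b","10"]}'
  done
  peer chaincode query -C channel47 -n example02 -c '{"Args":["query","a"]}'
}
# run_transfers
```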

9. Observe the result of the invocation at bft.cli.1

You can now see how bft.cli.1 is affected by the invocation issued by bft.cli.0. To do so, fetch the genesis block for channel47:

$ peer channel fetch 0 ./channel47.block -c channel47 -o bft.frontend.2000:7050

Notice that for this client, we decided to contact bft.frontend.2000 instead of bft.frontend.1000 to fetch the genesis block.

You can now make peer bft.peer.1 join channel47:

$ peer channel join -b channel47.block

After the peer has fetched the ledger from one of the frontends, you can install the chaincode:

$ peer chaincode install -n example02 -v 1.3 -p github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd

Notice that since the chaincode is already instantiated at channel47, we do not need to explicitly instantiate it again.

You can now observe the effects of the invocation issued at bft.cli.0:

$ peer chaincode query -C channel47 -n example02 -c '{"Args":["query","a"]}'

2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2018-xx-xx xx:xx:xx.xx UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
Query Result: 90

Instead of typing all these commands manually, you can simply execute query_demo.sh anywhere within bft.cli.1's container.

10. Generate workload

You can also use special test clients to inject workload into the ordering service. To create a client that receives blocks from channel47:

$ deliver_stdout --server bft.frontend.1000:7050 --channelID channel47

To begin introducing workload, start a new container for a second client, and type the following command:

$ broadcast_msg --server bft.frontend.1000:7050 --channelID channel47 --size <size of each transaction> --messages <number of transactions to send>

Bear in mind that the "transactions" issued by this client are not valid chaincode invocations. They are just random payload meant to generate workload.

You can also create new channels as follows:

$ broadcast_config --server bft.frontend.1000:7050 --cmd newChain --chainID <channel ID>

Alternatively, you can generate workload by using another special program that emulates the Java component of a frontend, injecting workload directly into the ordering nodes:

$ startWorkload.sh <frontend ID> <channel ID> <num workers> <payload size> <txs per worker> <random|unsigned|signed> <delay (ms)>
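As a concrete illustration, here are the two workload commands above with example values substituted for the placeholders: 1000-byte payloads, 10000 transactions, and, for startWorkload.sh, 4 workers sending unsigned envelopes with no delay. The values are illustrative, not recommendations; the commands are wrapped in functions only so they can be reused:

```shell
# Send 10000 random 1000-byte transactions to channel47 via bft.frontend.1000.
example_broadcast() {
  broadcast_msg --server bft.frontend.1000:7050 --channelID channel47 --size 1000 --messages 10000
}
# Inject workload directly into the ordering nodes as frontend 1000.
example_workload() {
  startWorkload.sh 1000 channel47 4 1000 2500 unsigned 0
}
```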

11. Recovery and reconfiguration

Each ordering node can be restarted after a crash, as long as the number of simultaneously crashed nodes does not exceed f. Furthermore, it is also possible to change the set of ordering nodes on the fly via BFT-SMaRt's reconfiguration protocol. To add a node to the group, start it as you would any other node. To make that node join the existing group, use the reconfigure.sh script from within a bftsmart/fabric-tools container as follows:

$ reconfigure.sh <node id> <ip address> <port>

To remove a node from the group, use the same script specifying only the node id. Bear in mind that when doing this in a distributed setting, it is necessary to copy the ./hyperledger-bftsmart/config/currentView file into the containers that are about to join the group before anything else is done, because this file specifies the set of nodes that comprise the most up-to-date group. You must also make sure that the bftsmart/fabric-tools container is given this file.
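One way to move the file between containers is docker cp (standard Docker CLI). The sketch below assumes the container names used in this guide and that the relative path from this guide resolves inside the containers; adjust both if your layout differs:

```shell
# Copy the current view file from an up-to-date node into a joining node.
# $1 = source container, $2 = destination (joining) container
copy_current_view() {
  docker cp "$1:hyperledger-bftsmart/config/currentView" ./currentView
  docker cp ./currentView "$2:hyperledger-bftsmart/config/currentView"
}
# copy_current_view bft.node.0 bft.node.4
```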

Finally, bear in mind that, for the moment, recovery and reconfiguration of frontends are not supported.