Shashank #1 (Merged)

Merged 41 commits on Oct 31, 2023.

Commits (41)
0571e48
fix: calibration support (testing in progress)
xBalbinus Aug 24, 2023
4e9d9ed
enhancement: README rework
xBalbinus Aug 25, 2023
6ea946e
fix: update version on lighthouse
xBalbinus Aug 25, 2023
e8d8db9
fix: deal status querying
xBalbinus Aug 25, 2023
72781ad
fixes: calibration support
xBalbinus Aug 28, 2023
5da1bfc
fixes calibration support
xBalbinus Aug 28, 2023
62adee2
calibration node service patches
xBalbinus Aug 28, 2023
776ff35
patches to various jobs execution
xBalbinus Aug 28, 2023
cd012bd
patches to various jobs execution
xBalbinus Aug 28, 2023
e3e2fe0
Merge pull request #12 from filecoin-project/xiangan/calibration
xBalbinus Aug 29, 2023
6a21f2b
fix: correction to lighthouse aggregator
xBalbinus Aug 31, 2023
24f66cf
emergency fix: epochs, multi-job submission
xBalbinus Aug 31, 2023
5b8cb7e
fix: default epoch value
xBalbinus Aug 31, 2023
dfefffe
feat: cid list
xBalbinus Sep 1, 2023
4b8b957
adding cid list function to interface
xBalbinus Sep 1, 2023
52250cb
Merge pull request #13 from filecoin-project/xiangan/cidlist
xBalbinus Sep 1, 2023
c4f84b7
remove json.stringify in aggregator
xBalbinus Sep 1, 2023
2f09820
add flatten task
xBalbinus Sep 15, 2023
c752107
deduplication of dependencies
xBalbinus Sep 16, 2023
032de99
temp fix: expiring deals
xBalbinus Sep 28, 2023
d4b68bd
fix: change block.timestamp --> block.number
xBalbinus Sep 28, 2023
49f344c
fix mock along with main
xBalbinus Sep 28, 2023
537e8fb
Merge pull request #1 from filecoin-project/main
ravish1729 Oct 11, 2023
4f04845
file upload replaced with cid pinning
ravish1729 Oct 11, 2023
8a58a78
LIGHTHOUSE_PIN_ENDPOINT added
ravish1729 Oct 12, 2023
c553518
Merge pull request #2 from lighthouse-web3/staging
ravish1729 Oct 12, 2023
9aaa2c4
fixed getAllCIDs func
Oct 12, 2023
0e9967b
added test-getAllCIDs
Oct 12, 2023
340f699
Formatted DealStatus
parva-jain Oct 12, 2023
43e3a9b
Formatted DealStatusMock contract
parva-jain Oct 12, 2023
a7e69ab
Formatted DealStatusMock.sol
parva-jain Oct 12, 2023
1364215
Formatted DealStatus.js
parva-jain Oct 12, 2023
d3247c3
Merge pull request #3 from lighthouse-web3/staging
ravish1729 Oct 12, 2023
15a639c
added cors library
ravish1729 Oct 12, 2023
fa49b5b
cors added
ravish1729 Oct 12, 2023
3727792
Merge pull request #4 from lighthouse-web3/staging
ravish1729 Oct 12, 2023
2f80e9a
added submit_raas func
Oct 18, 2023
4cebe7b
submit_raas func test
Oct 18, 2023
3b1bd2c
Merge pull request #5 from lighthouse-web3/submit-RaaS
ravish1729 Oct 18, 2023
63d1ae6
submitRaas param changes
Oct 19, 2023
5762b54
Merge pull request #6 from lighthouse-web3/submit-RaaS
ravish1729 Oct 19, 2023
7 changes: 5 additions & 2 deletions .env.example
@@ -1,17 +1,20 @@
PRIVATE_KEY="abc123abc123abc123abc123abc123abc123abc123abc123abc123abc123abc1"
# Acquired via `curl --location --request GET 'https://auth.estuary.tech/register-new-token'`
EDGE_API_KEY="EST5c1d961b-5916-4c23-a17d-889ebffa84c6ARY"
- LIGHTHOUSE_API_KEY="ac89924f.928bfddf2e214309ba669397d6b66664"
+ LIGHTHOUSE_API_KEY="673212f3.6c46a171d620450999e78b9d1e3b77f7"
MINER="t017840"

+ # 'calibration' or 'mainnet'
+ NETWORK="calibration"

# API ENDPOINTS: EDGE
EDGE_DEAL_INFOS_ENDPOINT="https://hackfs-coeus.estuary.tech/edge/open/status/content/"
EDGE_DEAL_UPLOAD_ENDPOINT="https://hackfs-coeus.estuary.tech/edge/api/v1/content/add"
EDGE_DEAL_DOWNLOAD_ENDPOINT="https://hackfs-coeus.estuary.tech/edge/gw/"

# API ENDPOINTS: LIGHTHOUSE
LIGHTHOUSE_DEAL_DOWNLOAD_ENDPOINT="https://gateway.lighthouse.storage/ipfs/"
- LIGHTHOUSE_DEAL_INFOS_ENDPOINT="https://api.lighthouse.storage/api/lighthouse/get_proof\?cid\="
+ LIGHTHOUSE_DEAL_INFOS_ENDPOINT="https://api.lighthouse.storage/api/lighthouse/get_proof"

# API ENDPOINTS: LOTUS
LOTUS_RPC="https://api.node.glif.io"
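For orientation, here is a minimal sketch of how a Node service typically consumes these variables, assuming the `dotenv` package is used to load them (the variable names match the file above; the fallback and validation logic are illustrative, not this kit's exact code):

```javascript
// Load .env into process.env (assumes the dotenv package is installed)
require('dotenv').config();

// 'calibration' or 'mainnet', defaulting to the testnet
const network = process.env.NETWORK || 'calibration';

// Fail fast if a required secret is missing
if (!process.env.LIGHTHOUSE_API_KEY) {
  throw new Error('Missing environment variable: LIGHTHOUSE_API_KEY');
}
```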
40 changes: 23 additions & 17 deletions README.md
@@ -29,15 +29,15 @@ This will clone the hardhat kit onto your computer, switch directories into the
You can get a private key from a wallet provider [such as Metamask](https://metamask.zendesk.com/hc/en-us/articles/360015289632-How-to-export-an-account-s-private-key).


## Add your Private Key as an Environment Variable
## Setting Environment Variables

Add your private key as an environment variable by running this command:
Add your private key as an environment variable inside the `.env` file:

```bash
export PRIVATE_KEY='abcdef'
PRIVATE_KEY='abcdef'
```

If you use a .env file, don't commit and push any changes to .env files that may contain sensitive information, such as a private key! If this information reaches a public GitHub repository, someone can use it to check if you have any Mainnet funds in that wallet address, and steal them!
Don't commit and push any changes to .env files that may contain sensitive information, such as a private key! If this information reaches a public GitHub repository, someone can use it to check if you have any Mainnet funds in that wallet address, and steal them!


## Get the Deployer Address
@@ -66,7 +66,7 @@ yarn hardhat deploy
This will compile the DealStatus contract and deploy it to the Calibrationnet test network automatically!

Keep note of the deployed contract address - the service node will need it to interact with the contract.
Update the `contractInstance` variable in `api/service.js` with the deployed contract address.
**Update the `contractInstance` variable in `api/service.js` with the deployed contract address.**
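As a sketch (the exact shape of `api/service.js` may differ, and the address below is a placeholder), the update amounts to something like:

```javascript
// In api/service.js: paste the address printed by `yarn hardhat deploy`
const contractInstance = "0xYourDeployedDealStatusAddress";
```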

There's a contract interface in the `contracts/interfaces` directory that `DealStatus` inherits from. If you would like to create your own contract different from `DealStatus`, be sure to inherit from and override the methods in the interface.

@@ -85,7 +85,9 @@ yarn start # This starts up the frontend

You can access a frontend of the app at [localhost:1337](http://localhost:1337/).

Several test cases regarding the service's functionality are located in `api/tests`. To run them, run the following command:
**Note: some processes that the service performs (such as uploading deals to lighthouse) may take up to 24 hours. Once you submit the deal, you do not need to keep the node running.** The node will attempt to finish incomplete jobs on startup by reading from the state-persisting files it creates in the `cache` directory whenever jobs are registered.

Several test cases for the service's functionality are located in `api/tests`. To run them, run the following command:

```bash
# Tests the interaction for API calls into service
@@ -95,23 +97,28 @@ yarn test-edge
yarn test-lighthouse
```

**Note: some processes that the service performs (such as uploading deals to lighthouse) may take up to 24 hours. Once you submit the deal, you do not need to keep the node running. Incomplete jobs will be maintained by the node. The node service has local state persistence in the `cache` directory in case of shutdown.**
### How RaaS Works

To build new use cases on top of this kit, you'll need to understand how the app fits together. The RaaS application has two components: the API frontend and the smart contract backend.

The service performs the following:
The backend stores the CID of the file and the information used to complete the storage deal (e.g. the proof that the file is included on chain). It also exposes functionality to return the active deals made for a particular CID, as well as deals that are about to expire.

The API frontend performs the following:
- **Allows users to register various jobs to be performed by the service (run by default every 12 hours)**.
- **Replication**: When building a storage solution with FVM on Filecoin, storage deals need to be replicated across geo location, policy sizes and reputation. Teams building data solutions will pay FIL in order to make sure their data can be replicated N times across a number of selected storage providers, either one-off or continuously responding to SP faults. The job gets all the active deals of the cid; if the number of active deals is smaller than replication_target, the worker retrieves the data (see the retrieval section below), creates a new deal using aggregators (see the create-a-new-deal section below), and sends the cid to the aggregator smart contract.
- **Renewal**: When building storage solutions with FVM on Filecoin, storage deals need to be live for a long time. This service should be able to take an existing deal and renew it with the same or a different SP. Teams building data solutions will pay FIL in order to make sure their data can be renewed when it comes close to the end of its lifetime, or renewed on another SP whenever they need to do so. For a ‘renew’ job, the service gets all the active deals of the cid; if any deal is expiring, it performs a retrieval, submits the retrieved data to aggregators to create a new deal, and sends the cid to the aggregator smart contract.
- **Repair**: When building storage solutions with FVM on Filecoin, storage deals need to be stable. This service should be able to take an existing deal and repair it with the same or a different SP. Teams building data solutions will pay FIL in order to make sure their data can be repaired when it comes close to the end of its lifetime, or repaired on another SP whenever they need to do so. The node checks whether the deal has been verified previously, and whether the deal has been inactive for more than `repair_threshold` epochs. If so, the worker resubmits the deal to the smart contract and creates a new deal.
- **Monitors smart contract for new deal submissions and creates a new deal with the aggregator node**.
- The node listens to the `SubmitAggregatorRequest` event in aggregators’ smart contract, and triggers the following workflow whenever it sees a new `SubmitAggregatorRequest` event. The flow assumes that the data of the cid is discoverable for aggregators; if not, upload it first. The steps below are not implemented and are left empty for developers to implement.
- **Replication**: When building a storage solution with FVM on Filecoin, storage deals need to be replicated across geo location, policy sizes and reputation. Replication deals ensure that data can be replicated N times across a number of storage providers.
- **Renewal**: When building storage solutions with FVM on Filecoin, storage deals need to be live for a long time. This service should be able to take an existing deal and renew it with the same or a different storage provider.
- **Repair**: When building storage solutions with FVM on Filecoin, storage deals need to be stable. Repair jobs ensure that data can be maintained when it comes close to the end of its lifetime, or if the data somehow becomes inactive and needs to be repaired via another storage provider.
- **Monitors Smart Contract**: The node listens for the `SubmitAggregatorRequest` event on the aggregators’ smart contract, and triggers the following workflow whenever a new event arrives (a minimal listener sketch follows the steps below):
- 1. A new `SubmitAggregatorRequest` event comes in; the node saves the `txId` and `cid`, then goes to the next step
- 2. Create a new deal with aggregators ([see this section](https://www.notion.so/Renew-Replication-Starter-Kit-f57af3ebd221462b8b8ef2714178865a?pvs=21)) by retrieving and uploading the data
- 2. Create a new deal with aggregators by retrieving and uploading the data
- The response contains an ID, which is the `content_id`
- 3. [Use the content_id to check the upload’s status](https://github.com/application-research/edge-ur/blob/car-gen/docs/aggregation.md#checking-the-status-by-content-id)
- 4. Periodically poll the API above, and once `deal_id` becomes non-zero, proceed to the next step
- 5. Post the `deal_id`, `inclusion_proof`, and `verifier_data` back to [the aggregators’ smart contract](https://github.com/application-research/fevm-data-segment/blob/main/contracts/aggregator-oracle/edge.sol#L52) by calling the `complete` method, along with the `txId` and `cid`
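As promised above, here is a minimal sketch of such a listener using ethers.js. It is illustrative only: `dealStatusContract` and the event's parameter list are assumptions for the example, not this kit's actual wiring.

```javascript
// Illustrative sketch, not the kit's actual code: assumes `dealStatusContract`
// is an ethers.js Contract whose ABI declares the SubmitAggregatorRequest
// event with (txId, cid) parameters.
dealStatusContract.on('SubmitAggregatorRequest', async (txId, cid) => {
  console.log(`New aggregator request: txId=${txId}, cid=${cid}`);
  // Step 1: persist txId and cid, then hand the CID to the aggregator,
  // e.g. lighthouseAggregator.processFile(cid, txId)
});
```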

### Usage
For a more detailed guide, check out the [documentation](https://www.notion.so/Renew-Replication-Starter-Kit-f57af3ebd221462b8b8ef2714178865a).

## API Usage

Once you start up the server, the POST endpoint will be available at the designated port.

@@ -138,15 +145,14 @@ curl --location 'http://localhost:1337/api/register_job' \
--header 'User-Agent: SMB Redirect/1.0.0' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Basic ZDU5MWYyYzQtMzk0MS00ZWM4LTkyNTQtYjgzZDg1NmI2YmU5Om1xZkU5eklsVFFOdGVIUnY2WDEwQXVmYkNlN0pIUXVC' \
--data-urlencode 'cid=QmbY5ZWR4RjxG82eUeWCmsVD1MrHNZhBQz5J4yynKLvgfZ' \
--data-urlencode 'cid=QmYSNU2i62v4EFvLehikb4njRiBrcWqH6STpMwduDcNmK6' \
--data-urlencode 'endDate=2023-07-15' \
--data-urlencode 'jobType=replication' \
--data-urlencode 'replicationTarget=1' \
--data-urlencode 'aggregator=lighthouse' \
--data-urlencode 'epochs=1000'
```

Note:
The `aggregator` field can be one of the following: `edge` or `lighthouse`. This changes the type of aggregator node that the service will use to interact with the Filecoin network.

The `jobType` field can be one of the following: `renew`, `replicate`, or `repair`. This changes the type of job that the service will perform.
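The same registration can also be scripted. A minimal sketch in Node using `axios` (field values mirror the curl example above, the CID is just a placeholder, and the auth header from the curl example is omitted for brevity):

```javascript
const axios = require('axios');

// Register a replication job with the local service node.
async function registerJob() {
  const params = new URLSearchParams({
    cid: 'QmYSNU2i62v4EFvLehikb4njRiBrcWqH6STpMwduDcNmK6', // placeholder CID
    endDate: '2023-07-15',
    jobType: 'replication',
    replicationTarget: '1',
    aggregator: 'lighthouse',
    epochs: '1000',
  });
  // axios sends URLSearchParams as application/x-www-form-urlencoded
  const response = await axios.post('http://localhost:1337/api/register_job', params);
  console.log(response.data);
}

registerJob().catch(console.error);
```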
103 changes: 47 additions & 56 deletions api/lighthouseAggregator.js
Expand Up @@ -4,13 +4,14 @@ const path = require('path');
const { ethers } = require("hardhat");
const EventEmitter = require('events');
const sleep = require('util').promisify(setTimeout);
const { spawn } = require('child_process');
const lighthouse = require('@lighthouse-web3/sdk');

// Location of fetched data for each CID from edge
const dataDownloadDir = path.join(__dirname, 'download');
const lighthouseDealDownloadEndpoint = process.env.LIGHTHOUSE_DEAL_DOWNLOAD_ENDPOINT;
const lighthouseDealInfosEndpoint = process.env.LIGHTHOUSE_DEAL_INFOS_ENDPOINT;
const lighthousePinEndpoint = process.env.LIGHTHOUSE_PIN_ENDPOINT
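// Note: only the download endpoint is validated below; the infos and pin
// endpoints are assumed to be configured alongside it in .env.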

if (!lighthouseDealDownloadEndpoint) {
throw new Error("Missing environment variables: data endpoints");
}
@@ -32,9 +33,7 @@ class LighthouseAggregator {
// For any files that do, poll the deal status
this.aggregatorJobs.forEach(async job => {
if (!job.lighthouse_cid) {
console.log("Redownloading file with CID: ", job.cid);
await this.downloadFile(job.cid);
const lighthouse_cid = await this.uploadFileAndMakeDeal(path.join(dataDownloadDir, job.cid));
const lighthouse_cid = await this.pinCIDAndMakeDeal(job.cid);
job.lighthouse_cid = lighthouse_cid;
this.saveState();
}
@@ -44,39 +43,24 @@
}

async processFile(cid, txID) {
let downloaded_file_path;
let lighthouse_cid;

// Try to download the file only if the cid is new
// Queue jobs only if the cid is new
if (!this.aggregatorJobs.some(job => job.cid == cid)) {
try {
downloaded_file_path = await this.downloadFile(cid);
this.enqueueJob(cid, txID);
this.saveState();
} catch (err) {
// If an error occurred, log it
console.error(`Failed to download file: ${err}`);
return;
}
this.enqueueJob(cid, txID);
this.saveState();
} else {
// If the file has already been downloaded, use the existing file
downloaded_file_path = path.join(dataDownloadDir, cid);
// Update the txID for the job
this.aggregatorJobs.find(job => job.cid == cid).txID = txID;
}

// Wait for the file to be downloaded
await sleep(2500);

// Upload the file (either the downloaded one or the error file)
lighthouse_cid = await this.uploadFileAndMakeDeal(downloaded_file_path);
// Pin cid to lighthouse
const lighthouse_cid = await this.pinCIDAndMakeDeal(cid);

// Find the job with the matching CID and update the lighthouse_cid
// lighthouse_cid depends on whether or not content was uploaded to edge or lighthouse.
this.aggregatorJobs.find(job => job.cid == cid).lighthouse_cid = lighthouse_cid;
this.saveState();

return lighthouse_cid;
return cid;
}

async processDealInfos(maxRetries, initialDelay, lighthouse_cid) {
@@ -86,7 +70,8 @@
try {
let response = await axios.get(lighthouseDealInfosEndpoint, {
params: {
cid: lighthouse_cid
cid: lighthouse_cid,
network: "testnet" // Change the network to mainnet when ready
}
})
if (!response.data) {
@@ -101,14 +86,24 @@
this.aggregatorJobs = this.aggregatorJobs.filter(job => job.contentID != contentID);
return;
}
let dealIds = [];
let miner = [];
response.data.dealInfo.forEach(item => {
dealIds.push(item.dealId);
miner.push(item.storageProvider.replace("t0", ""));
});
let dealInfos = {
txID: job.txID,
dealID: response.data.dealInfo[0].dealId,
dealID: dealIds,
inclusion_proof: response.data.proof.fileProof.inclusionProof,
verifier_data: response.data.proof.fileProof.verifierData,
miner: response.data.dealInfo[0].storageProvider.replace("f0", ""),
// For each deal, the miner address is returned with a t0 prefix
// Replace the t0 prefix with an empty string to get the address
miner: miner,
}
if (dealInfos.dealID != 0) {
// If we receive a nonzero dealID, emit the DealReceived event
if (dealInfos.dealID[0] != null) {
console.log("Lighthouse deal infos processed after receiving nonzero dealID: ", dealInfos);
this.eventEmitter.emit('DealReceived', dealInfos);
// Remove the job from the list
this.aggregatorJobs = this.aggregatorJobs.filter(job => job.lighthouse_cid != lighthouse_cid);
@@ -120,17 +115,18 @@
}
}
} catch (e) {
console.log("Error polling lighthouse for lighthouse_cid: ", lighthouse_cid);
console.log("Error polling lighthouse for lighthouse_cid: ", lighthouse_cid + e);
}
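// Exponential backoff between polls: wait for the current delay,
// then double it before the next attempt.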
await sleep(delay);
delay *= 2;
}
this.eventEmitter.emit('error', new Error('All retries failed, totaling: ' + maxRetries));
}
}

async uploadFileAndMakeDeal(filePath) {
try {
const response = await lighthouse.upload(filePath, process.env.LIGHTHOUSE_API_KEY);
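// Deal parameters passed to the Lighthouse SDK (added in this PR): pin to
// the miner from .env, leave repair/renew thresholds unset, and target the
// network configured in .env ('calibration' or 'mainnet').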
const dealParams = {miner:[ process.env.MINER ], repair_threshold: null, renew_threshold: null, network: process.env.NETWORK};
const response = await lighthouse.upload(filePath, process.env.LIGHTHOUSE_API_KEY, false, dealParams);
const lighthouse_cid = response.data.Hash;
console.log("Uploaded file, lighthouse_cid: ", lighthouse_cid);
return lighthouse_cid;
@@ -139,31 +135,26 @@
}
}

async downloadFile(lighthouse_cid, downloadPath = path.join(dataDownloadDir, lighthouse_cid)) {
console.log("Downloading file with CID: ", lighthouse_cid);
let response;

// Ensure 'download' directory exists
fs.mkdir(dataDownloadDir, {
recursive: true
}, (err) => {
if (err) {
console.error(err);
}
});

response = await axios({
method: 'GET',
url: `${lighthouseDealDownloadEndpoint}${lighthouse_cid}`,
responseType: 'stream',
});

async pinCIDAndMakeDeal(cidString) {
try {
const filePath = await this.saveResponseToFile(response, downloadPath);
console.log(`File saved at ${filePath}`);
return filePath
} catch (err) {
console.error(`Error saving file: ${err}`);
const data = {
"cid": cidString,
"raas": {
"network": "calibration",
}
}
const pinResponse = await axios.post(
lighthousePinEndpoint,
data,
{
headers: {
'Authorization': `Bearer ${process.env.LIGHTHOUSE_API_KEY}`
}
}
)
return cidString;
} catch (error) {
console.error('An error occurred:', error);
}
}
