[SLT-315] refactor(opbot): use gql api #3260
base: master
Codecov Report

Attention: Patch coverage is
Additional details and impacted files:

```
@@            Coverage Diff             @@
##              master       #3260          +/-   ##
====================================================
- Coverage   44.38046%   25.92216%   -18.45830%
====================================================
  Files            137          98          -39
  Lines           5383        3931        -1452
  Branches         356          82         -274
====================================================
- Hits            2389        1019        -1370
+ Misses          2979        2906          -73
+ Partials          15           6           -9
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 7
🧹 Outside diff range and nitpick comments (8)
contrib/opbot/internal/model.go (6)
`3-9`: Add a comment for the exported type `GetRFQByTxIDResponse`

The struct is well-structured and follows Go conventions. However, to improve documentation and address the static analysis hint, please add a comment describing the purpose and usage of this type.

Example comment:

```go
// GetRFQByTxIDResponse represents the complete response structure for a Request for Quotation (RFQ) lookup by transaction ID.
// It includes details about the bridge, request, relay, proof, and claim associated with the RFQ.
```

🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 3-3: exported: exported type GetRFQByTxIDResponse should have comment or be unexported (revive)
`11-23`: Add a comment and consider changing the type of `SendChainGas`

The `Bridge` struct is well-structured and follows Go conventions. However, please consider the following improvements:

- Add a comment describing the purpose and usage of this type to improve documentation and address the static analysis hint.
- The `SendChainGas` field is currently of type `int`. Depending on the expected range of values, this might not be sufficient. Consider using `uint64` or `big.Int` if larger values are possible.

Example comment and type change:

```go
// Bridge represents the core information of a bridge transaction,
// including details about the origin and destination chains, tokens, and amounts.
type Bridge struct {
	// ... other fields ...
	SendChainGas uint64 `json:"sendChainGas"`
}
```

🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 11-11: exported: exported type Bridge should have comment or be unexported (revive)
`25-29`: Add a comment and consider changing the type of `BlockNumber`

The `BridgeRequest` struct is well-structured and follows Go conventions. However, please consider the following improvements:

- Add a comment describing the purpose and usage of this type to improve documentation and address the static analysis hint.
- The `BlockNumber` field is currently of type `string`. Depending on how this value is used, consider changing it to a numeric type like `uint64` for better type safety and easier numerical operations.

Example comment and type change:

```go
// BridgeRequest represents the details of a bridge request,
// including the block number, timestamp, and transaction hash.
type BridgeRequest struct {
	BlockNumber     uint64 `json:"blockNumber,string"`
	BlockTimestamp  int64  `json:"blockTimestamp"`
	TransactionHash string `json:"transactionHash"`
}
```

Note: If you need to keep the JSON representation as a string, you can use the `,string` tag as shown above.

🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 25-25: exported: exported type BridgeRequest should have comment or be unexported (revive)
`31-37`: Add a comment and consider changing the type of `BlockNumber`

The `BridgeRelay` struct is well-structured and follows Go conventions. However, please consider the following improvements:

- Add a comment describing the purpose and usage of this type to improve documentation and address the static analysis hint.
- The `BlockNumber` field is currently of type `string`. Depending on how this value is used, consider changing it to a numeric type like `uint64` for better type safety and easier numerical operations.

Example comment and type change:

```go
// BridgeRelay represents the details of a bridge relay operation,
// including block information, transaction hash, relayer, and destination address.
type BridgeRelay struct {
	BlockNumber     uint64 `json:"blockNumber,string"`
	BlockTimestamp  int64  `json:"blockTimestamp"`
	TransactionHash string `json:"transactionHash"`
	Relayer         string `json:"relayer"`
	To              string `json:"to"`
}
```

Note: If you need to keep the JSON representation as a string, you can use the `,string` tag as shown above.

🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 31-31: exported: exported type BridgeRelay should have comment or be unexported (revive)
`39-44`: Add a comment and consider changing the type of `BlockNumber`

The `BridgeProof` struct is well-structured and follows Go conventions. However, please consider the following improvements:

- Add a comment describing the purpose and usage of this type to improve documentation and address the static analysis hint.
- The `BlockNumber` field is currently of type `string`. Depending on how this value is used, consider changing it to a numeric type like `uint64` for better type safety and easier numerical operations.

Example comment and type change:

```go
// BridgeProof represents the proof details for a bridge operation,
// including block information, transaction hash, and relayer address.
type BridgeProof struct {
	BlockNumber     uint64 `json:"blockNumber,string"`
	BlockTimestamp  int64  `json:"blockTimestamp"`
	TransactionHash string `json:"transactionHash"`
	Relayer         string `json:"relayer"`
}
```

Note: If you need to keep the JSON representation as a string, you can use the `,string` tag as shown above.

🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 39-39: exported: exported type BridgeProof should have comment or be unexported (revive)
`46-53`: Add a comment and consider changing the types of `BlockNumber` and `AmountFormatted`

The `BridgeClaim` struct is well-structured and follows Go conventions. However, please consider the following improvements:

- Add a comment describing the purpose and usage of this type to improve documentation and address the static analysis hint.
- The `BlockNumber` field is currently of type `string`. Depending on how this value is used, consider changing it to a numeric type like `uint64` for better type safety and easier numerical operations.
- The `AmountFormatted` field is of type `string`. Depending on the precision required and how this value is used, consider using a more appropriate type such as `decimal.Decimal` from the `shopspring/decimal` package for accurate decimal arithmetic.

Example comment and type changes:

```go
import "github.com/shopspring/decimal"

// BridgeClaim represents the details of a bridge claim operation,
// including block information, transaction details, and the claimed amount.
type BridgeClaim struct {
	BlockNumber     uint64          `json:"blockNumber,string"`
	BlockTimestamp  int64           `json:"blockTimestamp"`
	TransactionHash string          `json:"transactionHash"`
	To              string          `json:"to"`
	Relayer         string          `json:"relayer"`
	AmountFormatted decimal.Decimal `json:"amountFormatted"`
}
```

Note: If you need to keep the JSON representation as a string for `BlockNumber`, you can use the `,string` tag as shown above. For `AmountFormatted`, the `decimal.Decimal` type handles JSON marshaling and unmarshaling automatically.

🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 46-46: exported: exported type BridgeClaim should have comment or be unexported (revive)

contrib/opbot/botmd/botmd.go (1)
`33-33`: LGTM: New rfqClient field added

The addition of the `rfqClient` field of type `internal.RFQClient` aligns well with the PR objective of refactoring to use a GraphQL API. This encapsulation of RFQ functionality in an internal client is a good design choice.

Consider adding a brief comment explaining the purpose of this field, e.g.:

```go
// rfqClient handles interactions with the RFQ (Request for Quote) system
rfqClient internal.RFQClient
```

contrib/opbot/internal/client.go (1)
`1-1`: Add a package comment for the `internal` package

It's recommended to include a package-level comment that summarizes the purpose of the package.

🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 1-1: package-comments: should have a package comment (revive)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (7)
- contrib/opbot/botmd/botmd.go (3 hunks)
- contrib/opbot/botmd/commands.go (5 hunks)
- contrib/opbot/botmd/commands_test.go (0 hunks)
- contrib/opbot/botmd/export_test.go (0 hunks)
- contrib/opbot/config/config.go (1 hunks)
- contrib/opbot/internal/client.go (1 hunks)
- contrib/opbot/internal/model.go (1 hunks)
💤 Files with no reviewable changes (2)
- contrib/opbot/botmd/commands_test.go
- contrib/opbot/botmd/export_test.go
🧰 Additional context used
🪛 GitHub Check: Lint (contrib/opbot)
contrib/opbot/botmd/commands.go
[failure] 194-194:
G115: integer overflow conversion int -> uint32 (gosec)
[failure] 210-210:
G115: integer overflow conversion int -> uint32 (gosec)

contrib/opbot/internal/client.go
[failure] 66-66:
ifElseChain: rewrite if-else to switch statement (gocritic)
[failure] 1-1:
package-comments: should have a package comment (revive)

contrib/opbot/internal/model.go
[failure] 3-3:
exported: exported type GetRFQByTxIDResponse should have comment or be unexported (revive)
[failure] 11-11:
exported: exported type Bridge should have comment or be unexported (revive)
[failure] 25-25:
exported: exported type BridgeRequest should have comment or be unexported (revive)
[failure] 31-31:
exported: exported type BridgeRelay should have comment or be unexported (revive)
[failure] 39-39:
exported: exported type BridgeProof should have comment or be unexported (revive)
[failure] 46-46:
exported: exported type BridgeClaim should have comment or be unexported (revive)
🔇 Additional comments (2)
contrib/opbot/internal/model.go (1)
`1-53`: Overall structure is good, but some improvements are recommended

The file is well-structured and follows Go conventions. The defined structs provide a clear representation of the RFQ (Request for Quotation) process and related components. However, there are a few areas for improvement:

- Add comments for all exported types to improve documentation and address static analysis hints.
- Consider changing the type of `BlockNumber` fields from `string` to `uint64` for better type safety and easier numerical operations.
- For the `AmountFormatted` field in `BridgeClaim`, consider using a more appropriate type for decimal arithmetic, such as `decimal.Decimal`.
- Review the `SendChainGas` field in the `Bridge` struct to ensure the `int` type is sufficient for all possible gas values.

These changes will enhance the overall quality, maintainability, and type safety of the code.
🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 3-3:
exported: exported type GetRFQByTxIDResponse should have comment or be unexported (revive)
[failure] 11-11:
exported: exported type Bridge should have comment or be unexported (revive)
[failure] 25-25:
exported: exported type BridgeRequest should have comment or be unexported (revive)
[failure] 31-31:
exported: exported type BridgeRelay should have comment or be unexported (revive)
[failure] 39-39:
exported: exported type BridgeProof should have comment or be unexported (revive)
[failure] 46-46:
exported: exported type BridgeClaim should have comment or be unexported (revive)

contrib/opbot/botmd/botmd.go (1)
`9-9`: LGTM: New import for internal package

The addition of the `internal` package import is appropriate for the refactoring being done. Using an internal package is a good practice for code that should not be imported by external projects.
```go
// RFQIndexerAPIURL is the URL of the RFQ indexer API.
RFQIndexerAPIURL string `yaml:"rfq_indexer_api_url"`
```
🛠️ Refactor suggestion

Consider enhancing the new `RFQIndexerAPIURL` field

The addition of the `RFQIndexerAPIURL` field is good, but there are a few suggestions to improve it:

- Add a comment explaining the purpose of this field, similar to other fields in the struct.
- Consider adding validation for this field in the `Validate` method if it's required for the application to function correctly.
- For consistency, you might want to rename `RFQApiURL` to `RFQAPIURL` to match the naming convention of the new field.

Here's a suggested improvement for the field declaration:

```diff
+ // RFQIndexerAPIURL is the URL of the RFQ indexer API.
  RFQIndexerAPIURL string `yaml:"rfq_indexer_api_url"`
```

Consider updating the `Validate` method to include a check for this new field if necessary:

```go
func (c *Config) Validate() error {
	// ... existing validations ...
	if c.RFQIndexerAPIURL == "" {
		return errors.New("rfq_indexer_api_url is required")
	}
	return nil
}
```
```go
handler:   handler,
cfg:       cfg,
server:    server,
logger:    experimentalLogger.MakeWrappedSugaredLogger(sugaredLogger),
rfqClient: internal.NewRFQClient(handler, cfg.RFQIndexerAPIURL, cfg.RelayerURLS),
```
💡 Codebase verification

Issue Found: Missing `RFQIndexerAPIURL` in `config.Config`

The `RFQIndexerAPIURL` field is missing in the `config.Config` struct, which is required for the proper initialization of `rfqClient`.

- File: `contrib/opbot/config/config.go`

🔗 Analysis chain

LGTM: Initialization of rfqClient and improved formatting

The initialization of the `rfqClient` in the `NewBot` function is correct and aligns with the new field added to the `Bot` struct. The reformatting of the struct initialization improves readability.

Please ensure that the `RFQIndexerAPIURL` and `RelayerURLS` fields have been added to the `config.Config` struct. Run the following script to verify:
🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the presence of new config fields
# Test: Check for RFQIndexerAPIURL and RelayerURLS in the config struct
rg --type go 'type\s+Config\s+struct' -A 20 | rg 'RFQIndexerAPIURL|RelayerURLS'
```

Length of output: 240
contrib/opbot/internal/client.go
Outdated
```go
	SetResult(&res).
	Get(fmt.Sprintf(getRequestByTxHash, txID))
if err != nil {
	return nil, "", fmt.Errorf("failed to get quote request by tx hash: %w", err)
```
Correct the error message in the `GetRFQByTxID` function

The error message refers to "tx hash" instead of "tx ID". Update it for consistency with the function's purpose.

Apply this diff to fix the error message:

```diff
- return nil, "", fmt.Errorf("failed to get quote request by tx hash: %w", err)
+ return nil, "", fmt.Errorf("failed to get quote request by tx ID: %w", err)
```
contrib/opbot/internal/client.go
Outdated
```go
for _, relayerClient := range r.relayerClients {
	resp, err = relayerClient.GetQuoteRequestByTxHash(ctx, txHash)
	if err != nil {
		return nil, fmt.Errorf("failed to get quote request by tx hash: %w", err)
	}
}

return resp, nil
```
Handle errors from multiple relayer clients appropriately
Currently, the function returns on the first error encountered, which means it doesn't attempt to fetch from other relayer clients. Consider modifying the logic to try all clients before returning an error.
Apply this diff to continue trying other clients on error:

```diff
 func (r *rfqClientImpl) GetRFQByTxHash(ctx context.Context, txHash string) (*relapi.GetQuoteRequestResponse, error) {
 	var resp *relapi.GetQuoteRequestResponse
-	var err error
+	var errs []error
 	for _, relayerClient := range r.relayerClients {
-		resp, err = relayerClient.GetQuoteRequestByTxHash(ctx, txHash)
+		resp, err := relayerClient.GetQuoteRequestByTxHash(ctx, txHash)
 		if err != nil {
-			return nil, fmt.Errorf("failed to get quote request by tx hash: %w", err)
+			errs = append(errs, err)
+			continue
 		}
+		return resp, nil
 	}
-
-	return resp, nil
+	return nil, fmt.Errorf("failed to get quote request by tx hash from all relayer clients: %v", errs)
 }
```
contrib/opbot/internal/client.go
Outdated
```go
if res.BridgeClaim != (BridgeClaim{}) {
	status = "Claimed"
} else if res.BridgeProof != (BridgeProof{}) {
	status = "Proven"
} else if res.BridgeRelay != (BridgeRelay{}) {
	status = "Relayed"
} else if res.BridgeRequest != (BridgeRequest{}) {
	status = "Requested"
} else {
	status = "Unknown"
}
```
🛠️ Refactor suggestion
Refactor the if-else chain into a switch statement
The current if-else chain can be simplified using a switch statement for improved readability and maintainability.
Apply this diff to refactor the code:

```diff
 var status string
-if res.BridgeClaim != (BridgeClaim{}) {
+switch {
+case res.BridgeClaim != (BridgeClaim{}):
 	status = "Claimed"
-} else if res.BridgeProof != (BridgeProof{}) {
+case res.BridgeProof != (BridgeProof{}):
 	status = "Proven"
-} else if res.BridgeRelay != (BridgeRelay{}) {
+case res.BridgeRelay != (BridgeRelay{}):
 	status = "Relayed"
-} else if res.BridgeRequest != (BridgeRequest{}) {
+case res.BridgeRequest != (BridgeRequest{}):
 	status = "Requested"
-} else {
+default:
 	status = "Unknown"
 }
```
🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 66-66:
ifElseChain: rewrite if-else to switch statement (gocritic)
```go
},
{
	Type: slack.MarkdownType,
	Text: fmt.Sprintf("*OriginTxHash*: %s", toTXSlackLink(res.BridgeRequest.TransactionHash, uint32(res.Bridge.OriginChainID))),
```
Potential integer overflow when converting `int` to `uint32`

At line 194, `res.Bridge.OriginChainID` is being converted from `int` to `uint32` using `uint32(res.Bridge.OriginChainID)`. This conversion can lead to an integer overflow if `res.Bridge.OriginChainID` exceeds the maximum value of a `uint32` (4,294,967,295). Such an overflow could result in incorrect chain IDs and unexpected behavior.

Consider verifying that `res.Bridge.OriginChainID` always falls within the `uint32` range. If there is a possibility of higher values, you might:

- Use a larger unsigned integer type like `uint64`.
- Modify the `toTXSlackLink` function to accept an `int` or `int64` to avoid unnecessary and potentially unsafe type conversions.
🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 194-194:
G115: integer overflow conversion int -> uint32 (gosec)
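One way to silence the G115 warning without widening the types is an explicit bounds check before converting. A minimal sketch (the `safeUint32` helper name is hypothetical, not from the PR):

```go
package main

import (
	"fmt"
	"math"
)

// safeUint32 converts an int to uint32, returning an error instead of
// silently wrapping when the value is negative or exceeds math.MaxUint32.
// The uint64 conversion is only evaluated when v is non-negative, so the
// comparison is safe on both 32- and 64-bit platforms.
func safeUint32(v int) (uint32, error) {
	if v < 0 || uint64(v) > math.MaxUint32 {
		return 0, fmt.Errorf("chain ID %d out of uint32 range", v)
	}
	return uint32(v), nil
}

func main() {
	if id, err := safeUint32(42161); err == nil {
		fmt.Println(id) // an in-range chain ID passes through unchanged
	}
	if _, err := safeUint32(-1); err != nil {
		fmt.Println("rejected:", err) // negative values are caught instead of wrapping
	}
}
```

The error branch surfaces a bad chain ID at the call site rather than producing a wrong Slack link.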
```go
} else {
	objects = append(objects, &slack.TextBlockObject{
		Type: slack.MarkdownType,
		Text: fmt.Sprintf("*DestTxHash*: %s", toTXSlackLink(res.BridgeRelay.TransactionHash, uint32(res.Bridge.DestChainID))),
```
Potential integer overflow when converting `int` to `uint32`

At line 210, `res.Bridge.DestChainID` is being converted from `int` to `uint32` using `uint32(res.Bridge.DestChainID)`. Similar to the issue at line 194, this could cause an integer overflow if `res.Bridge.DestChainID` exceeds the `uint32` maximum value, leading to incorrect chain IDs.

Ensure that `res.Bridge.DestChainID` is within the `uint32` range. If not, consider:

- Switching to a larger unsigned integer type like `uint64`.
- Adjusting the `toTXSlackLink` function to accept an `int` or `int64` to safely handle larger chain ID values without risking overflow.
🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 210-210:
G115: integer overflow conversion int -> uint32 (gosec)
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (6)
contrib/opbot/internal/client.go (1)
`28-50`: LGTM with a suggestion: Consider adding URL validation.

The `rfqClientImpl` struct and `NewRFQClient` function are well-implemented. The client is properly configured with headers and tracing. However, consider adding validation for the `indexerURL` and `relayerURLs` to ensure they are valid before creating the clients. Here's a simple example:

```go
import "net/url"

func validateURL(urlStr string) error {
	_, err := url.ParseRequestURI(urlStr)
	return err
}

// In NewRFQClient
if err := validateURL(indexerURL); err != nil {
	return nil, fmt.Errorf("invalid indexer URL: %w", err)
}
for _, urlStr := range relayerURLs {
	if err := validateURL(urlStr); err != nil {
		return nil, fmt.Errorf("invalid relayer URL %s: %w", urlStr, err)
	}
	// ... create relayer client
}
```

This will help prevent runtime errors due to invalid URLs.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 34-44: contrib/opbot/internal/client.go#L34-L44
Added lines #L34 - L44 were not covered by tests
[warning] 46-49: contrib/opbot/internal/client.go#L46-L49
Added lines #L46 - L49 were not covered by tests

contrib/opbot/botmd/commands.go (5)
`167-170`: Add unit tests for new RFQ client call

The new interaction with `b.rfqClient.GetRFQByTxID` is not covered by tests. Adding unit tests for this functionality will help catch potential issues and improve code reliability.

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 167-170: contrib/opbot/botmd/commands.go#L167-L170
Added lines #L167 - L170 were not covered by tests
`179-201`: Enhance test coverage for Slack message construction

The block constructing `objects` for the Slack message is currently not covered by tests. Consider adding unit tests to ensure the message formatting and content are correct.

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 179-201: contrib/opbot/botmd/commands.go#L179-L201
Added lines #L179 - L201 were not covered by tests
`203-213`: Verify `DestTxHash` handling for different statuses

Ensure that the logic for appending `DestTxHash` to `objects` handles all possible statuses correctly. Consider adding unit tests to cover both the "Requested" status and other cases.

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 203-213: contrib/opbot/botmd/commands.go#L203-L213
Added lines #L203 - L213 were not covered by tests
`243-243`: Add unit tests for refund functionality

The call to `b.rfqClient.GetRFQByTxHash` introduces new logic for refunds that is not currently tested. Adding unit tests will help verify that the refund process works as intended.

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 243-243: contrib/opbot/botmd/commands.go#L243
Added line #L243 was not covered by tests
`349-350`: Add unit tests for the `getTxAge` function

The new `getTxAge` function is crucial for accurate time display. Adding unit tests will help ensure it behaves as expected with various timestamp inputs.

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 349-350: contrib/opbot/botmd/commands.go#L349-L350
Added lines #L349 - L350 were not covered by tests
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- contrib/opbot/botmd/commands.go (5 hunks)
- contrib/opbot/internal/client.go (1 hunks)
- contrib/opbot/internal/model.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- contrib/opbot/internal/model.go
🧰 Additional context used
🪛 GitHub Check: codecov/patch
contrib/opbot/botmd/commands.go
[warning] 167-170: contrib/opbot/botmd/commands.go#L167-L170
Added lines #L167 - L170 were not covered by tests
[warning] 179-201: contrib/opbot/botmd/commands.go#L179-L201
Added lines #L179 - L201 were not covered by tests
[warning] 203-213: contrib/opbot/botmd/commands.go#L203-L213
Added lines #L203 - L213 were not covered by tests
[warning] 216-218: contrib/opbot/botmd/commands.go#L216-L218
Added lines #L216 - L218 were not covered by tests
[warning] 243-243: contrib/opbot/botmd/commands.go#L243
Added line #L243 was not covered by tests
[warning] 306-309: contrib/opbot/botmd/commands.go#L306-L309
Added lines #L306 - L309 were not covered by tests
[warning] 313-313: contrib/opbot/botmd/commands.go#L313
Added line #L313 was not covered by tests
[warning] 349-350: contrib/opbot/botmd/commands.go#L349-L350
Added lines #L349 - L350 were not covered by tests
[warning] 356-356: contrib/opbot/botmd/commands.go#L356
Added line #L356 was not covered by tests

contrib/opbot/internal/client.go
[warning] 34-44: contrib/opbot/internal/client.go#L34-L44
Added lines #L34 - L44 were not covered by tests
[warning] 46-49: contrib/opbot/internal/client.go#L46-L49
Added lines #L46 - L49 were not covered by tests
[warning] 53-60: contrib/opbot/internal/client.go#L53-L60
Added lines #L53 - L60 were not covered by tests
[warning] 62-64: contrib/opbot/internal/client.go#L62-L64
Added lines #L62 - L64 were not covered by tests
[warning] 66-77: contrib/opbot/internal/client.go#L66-L77
Added lines #L66 - L77 were not covered by tests
[warning] 80-80: contrib/opbot/internal/client.go#L80
Added line #L80 was not covered by tests
[warning] 84-91: contrib/opbot/internal/client.go#L84-L91
Added lines #L84 - L91 were not covered by tests
[warning] 94-94: contrib/opbot/internal/client.go#L94
Added line #L94 was not covered by tests
🔇 Additional comments (5)
contrib/opbot/internal/client.go (5)
`1-14`: LGTM: Package declaration and imports look good.

The package name "internal" is appropriate for an internal implementation, and the imports seem relevant to the RFQ client functionality being implemented.
`16-18`: LGTM: Constant declaration is appropriate.

The `getRequestByTxHash` constant is well-named and follows Go conventions. It's used for constructing the API endpoint, which is a good practice for maintaining consistency and ease of updates.
`20-26`: LGTM: RFQClient interface is well-defined.

The `RFQClient` interface is well-documented and provides a clear abstraction for RFQ operations. The method signatures are appropriate and follow Go conventions.
`52-81`: LGTM: GetRFQByTxID implementation looks good.

The `GetRFQByTxID` method is well-implemented with proper error handling and status determination. The switch statement for status determination is a good improvement over the previous if-else chain.

The error message on line 59 still refers to "tx hash" instead of "tx ID". This issue was pointed out in a previous review. Please update it for consistency:

```diff
- return nil, "", fmt.Errorf("failed to get quote request by tx hash: %w", err)
+ return nil, "", fmt.Errorf("failed to get quote request by tx ID: %w", err)
```

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 53-60: contrib/opbot/internal/client.go#L53-L60
Added lines #L53 - L60 were not covered by tests
[warning] 62-64: contrib/opbot/internal/client.go#L62-L64
Added lines #L62 - L64 were not covered by tests
[warning] 66-77: contrib/opbot/internal/client.go#L66-L77
Added lines #L66 - L77 were not covered by tests
[warning] 80-80: contrib/opbot/internal/client.go#L80
Added line #L80 was not covered by tests
`97-97`: LGTM: Good practice for interface compliance check.

The type assertion `var _ RFQClient = &rfqClientImpl{}` is a great practice. It ensures at compile-time that `rfqClientImpl` correctly implements the `RFQClient` interface.
contrib/opbot/internal/client.go
Outdated
```go
// GetRFQByTxHash gets a quote request by transaction hash.
func (r *rfqClientImpl) GetRFQByTxHash(ctx context.Context, txHash string) (*relapi.GetQuoteRequestResponse, error) {
	var resp *relapi.GetQuoteRequestResponse
	var err error
	for _, relayerClient := range r.relayerClients {
		resp, err = relayerClient.GetQuoteRequestByTxHash(ctx, txHash)
		if err != nil {
			return nil, fmt.Errorf("failed to get quote request by tx hash: %w", err)
		}
	}

	return resp, nil
}
```
Improve error handling in the GetRFQByTxHash method.

The current implementation of `GetRFQByTxHash` returns on the first error encountered, which means it doesn't attempt to fetch from other relayer clients. This might not be optimal if some relayer clients are temporarily unavailable.

Consider modifying the logic to try all clients before returning an error. Here's a suggested improvement:

```go
func (r *rfqClientImpl) GetRFQByTxHash(ctx context.Context, txHash string) (*relapi.GetQuoteRequestResponse, error) {
	var lastErr error
	for _, relayerClient := range r.relayerClients {
		resp, err := relayerClient.GetQuoteRequestByTxHash(ctx, txHash)
		if err == nil {
			return resp, nil
		}
		lastErr = err
	}
	return nil, fmt.Errorf("failed to get quote request by tx hash from all relayer clients: %w", lastErr)
}
```

This change will attempt to fetch from all available relayer clients before giving up, improving the robustness of the method.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 84-91: contrib/opbot/internal/client.go#L84-L91
Added lines #L84 - L91 were not covered by tests
[warning] 94-94: contrib/opbot/internal/client.go#L94
Added line #L94 was not covered by tests
contrib/opbot/internal/client.go (outdated)
// Package internal provides the RFQ client implementation.
package internal

import (
	"context"
	"fmt"
	"net/http"

	"github.com/dubonzi/otelresty"
	"github.com/go-http-utils/headers"
	"github.com/go-resty/resty/v2"
	"github.com/synapsecns/sanguine/core/metrics"
	"github.com/synapsecns/sanguine/services/rfq/relayer/relapi"
)

const (
	getRequestByTxHash = "/api/transaction-id/%s"
)

// RFQClient is the interface for the RFQ client.
type RFQClient interface {
	// GetRFQByTxID gets a quote request by transaction ID.
	GetRFQByTxID(ctx context.Context, txID string) (resp *GetRFQByTxIDResponse, status string, err error)
	// GetRFQByTxHash gets a quote request by transaction hash.
	GetRFQByTxHash(ctx context.Context, txHash string) (resp *relapi.GetQuoteRequestResponse, err error)
}

type rfqClientImpl struct {
	client         *resty.Client
	relayerClients []relapi.RelayerClient
}

// NewRFQClient creates a new RFQClient.
func NewRFQClient(handler metrics.Handler, indexerURL string, relayerURLs []string) RFQClient {
	client := resty.New()
	client.SetBaseURL(indexerURL)
	client.SetHeader(headers.UserAgent, "rfq-client")

	otelresty.TraceClient(client, otelresty.WithTracerProvider(handler.GetTracerProvider()))

	var relayerClients []relapi.RelayerClient
	for _, url := range relayerURLs {
		relayerClients = append(relayerClients, relapi.NewRelayerClient(handler, url))
	}

	return &rfqClientImpl{
		client:         client,
		relayerClients: relayerClients,
	}
}

// GetRFQByTxID gets a quote request by transaction ID.
func (r *rfqClientImpl) GetRFQByTxID(ctx context.Context, txID string) (*GetRFQByTxIDResponse, string, error) {
	var res GetRFQByTxIDResponse
	resp, err := r.client.R().SetContext(ctx).
		SetResult(&res).
		Get(fmt.Sprintf(getRequestByTxHash, txID))
	if err != nil {
		return nil, "", fmt.Errorf("failed to get quote request by tx hash: %w", err)
	}

	if resp.StatusCode() != http.StatusOK {
		return nil, "", fmt.Errorf("unexpected status code: %d", resp.StatusCode())
	}

	var status string
	switch {
	case res.BridgeClaim != (BridgeClaim{}):
		status = "Claimed"
	case res.BridgeProof != (BridgeProof{}):
		status = "Proven"
	case res.BridgeRelay != (BridgeRelay{}):
		status = "Relayed"
	case res.BridgeRequest != (BridgeRequest{}):
		status = "Requested"
	default:
		status = "Unknown"
	}

	return &res, status, nil
}

// GetRFQByTxHash gets a quote request by transaction hash.
func (r *rfqClientImpl) GetRFQByTxHash(ctx context.Context, txHash string) (*relapi.GetQuoteRequestResponse, error) {
	var resp *relapi.GetQuoteRequestResponse
	var err error
	for _, relayerClient := range r.relayerClients {
		resp, err = relayerClient.GetQuoteRequestByTxHash(ctx, txHash)
		if err != nil {
			return nil, fmt.Errorf("failed to get quote request by tx hash: %w", err)
		}
	}

	return resp, nil
}

var _ RFQClient = &rfqClientImpl{}
Improve test coverage for the RFQ client implementation.
The static analysis tool indicates that several parts of this file are not covered by tests. While 100% test coverage isn't always necessary, it's important to have good test coverage for critical parts of the code, especially for a client implementation like this.
Consider adding unit tests for the following:
- the NewRFQClient function
- the GetRFQByTxID method
- the GetRFQByTxHash method
These tests should cover various scenarios, including:
- Successful requests
- Error handling
- Edge cases (e.g., empty responses, network errors)
Adding these tests will improve the reliability and maintainability of the code.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 34-44: contrib/opbot/internal/client.go#L34-L44
Added lines #L34 - L44 were not covered by tests
[warning] 46-49: contrib/opbot/internal/client.go#L46-L49
Added lines #L46 - L49 were not covered by tests
[warning] 53-60: contrib/opbot/internal/client.go#L53-L60
Added lines #L53 - L60 were not covered by tests
[warning] 62-64: contrib/opbot/internal/client.go#L62-L64
Added lines #L62 - L64 were not covered by tests
[warning] 66-77: contrib/opbot/internal/client.go#L66-L77
Added lines #L66 - L77 were not covered by tests
[warning] 80-80: contrib/opbot/internal/client.go#L80
Added line #L80 was not covered by tests
[warning] 84-91: contrib/opbot/internal/client.go#L84-L91
Added lines #L84 - L91 were not covered by tests
[warning] 94-94: contrib/opbot/internal/client.go#L94
Added line #L94 was not covered by tests
_, err := ctx.Response().Reply(fmt.Sprintf("refund submitted with nonce %d", nonce)) | ||
if err != nil { | ||
log.Println(err) | ||
} |
🛠️ Refactor suggestion
Improve error handling when sending refund confirmation
If an error occurs when replying to the refund submission, the code currently logs the error but does not inform the user. Consider adding error handling or a retry mechanism to ensure the user is aware of any issues.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 306-309: contrib/opbot/botmd/commands.go#L306-L309
Added lines #L306 - L309 were not covered by tests
contrib/opbot/botmd/commands.go (outdated)
	return
}

-	_, err = ctx.Response().Reply(fmt.Sprintf("refund submitted: %s", toExplorerSlackLink(status.TxHash().String())))
+	_, err = ctx.Response().Reply(fmt.Sprintf("refund submitted: %s", toTXSlackLink(status.TxHash().String(), rawRequest.OriginChainID)))
🛠️ Refactor suggestion
Handle errors when sending refund transaction link
Similar to the previous point, if there's an error when sending the refund transaction link, consider handling it appropriately to enhance user experience and error transparency.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 313-313: contrib/opbot/botmd/commands.go#L313
Added line #L313 was not covered by tests
}

func toExplorerSlackLink(ogHash string) string {
	rfqHash := strings.ToUpper(ogHash)
	// cut off 0x
-	if strings.HasPrefix(rfqHash, "0x") {
+	if strings.HasPrefix(rfqHash, "0X") {
Ensure consistent hash prefix handling in toExplorerSlackLink
The current code checks for the "0X" prefix but does not handle the lowercase "0x" prefix. Consider updating the condition to handle both cases to prevent inconsistent behavior with different hash formats.
Apply this diff to address the issue:
if strings.HasPrefix(rfqHash, "0X") || strings.HasPrefix(rfqHash, "0x") {
rfqHash = rfqHash[2:]
}
rfqHash = strings.ToLower(rfqHash)
Committable suggestion was skipped due to low confidence.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 356-356: contrib/opbot/botmd/commands.go#L356
Added line #L356 was not covered by tests
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
contrib/opbot/internal/client.go (4)
16-18: Consider using a more descriptive constant name. The constant getRequestByTxHash holds an API endpoint path, so a name that says so would be clearer. Consider renaming it to something like getRequestByTxHashEndpoint:

- getRequestByTxHash = "/api/transaction-id/%s"
+ getRequestByTxHashEndpoint = "/api/transaction-id/%s"
28-50: LGTM: rfqClientImpl and NewRFQClient are well-implemented. The function properly initializes the HTTP client with tracing and sets up the relayer clients.

Consider adding error handling for the case when no relayer URLs are provided:

func NewRFQClient(handler metrics.Handler, indexerURL string, relayerURLs []string) RFQClient {
+	if len(relayerURLs) == 0 {
+		panic("no relayer URLs provided")
+	}
	client := resty.New()
	// ... rest of the function
}
52-81: LGTM: GetRFQByTxID method is well-implemented. It correctly retrieves the quote request by transaction ID and handles errors appropriately. The status determination logic uses a switch statement as suggested in a previous review.

Consider extracting the status determination logic into a separate function for improved readability and reusability:

func determineStatus(res *GetRFQByTxIDResponse) string {
	switch {
	case res.BridgeClaim != (BridgeClaim{}):
		return "Claimed"
	case res.BridgeProof != (BridgeProof{}):
		return "Proven"
	case res.BridgeRelay != (BridgeRelay{}):
		return "Relayed"
	case res.BridgeRequest != (BridgeRequest{}):
		return "Requested"
	default:
		return "Unknown"
	}
}

Then, you can simplify the GetRFQByTxID method:

status := determineStatus(&res)
return &res, status, nil
83-95: LGTM: GetRFQByTxHash method is well-implemented. It correctly iterates through all relayer clients and returns the first successful response. Error handling has been improved as suggested in a previous review.

Consider using a more descriptive error message when all clients fail:

- return nil, fmt.Errorf("could not get quote request by tx hash: %v", errs)
+ return nil, fmt.Errorf("failed to get quote request by tx hash from all relayer clients (%d): %v", len(errs), errs)

This change provides more context about the number of failed attempts.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- contrib/opbot/internal/client.go (1 hunks)
🧰 Additional context used
🔇 Additional comments (3)
contrib/opbot/internal/client.go (3)
1-14: LGTM: Package declaration and imports are appropriate. The package name "internal" is suitable for an internal implementation, and the imports include the necessary standard library packages and external dependencies for the RFQ client implementation.
20-26: LGTM: RFQClient interface is well-defined. It is clearly defined with two methods, GetRFQByTxID and GetRFQByTxHash, each with a descriptive comment explaining its purpose.
1-97: Overall, the RFQ client implementation is well-structured and follows good practices. The code addresses previous review comments and implements proper error handling. Minor suggestions for improvements have been made in this review.
As mentioned in a previous review comment, it's important to improve test coverage for this file. Please add unit tests for:

- the NewRFQClient function
- the GetRFQByTxID method
- the GetRFQByTxHash method

These tests should cover various scenarios, including successful requests, error handling, and edge cases. To verify the current test coverage, run the following command:
This will help ensure the reliability and maintainability of the code.
Actionable comments posted: 6
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- contrib/opbot/botmd/commands.go (5 hunks)
🧰 Additional context used
🪛 GitHub Check: codecov/patch
contrib/opbot/botmd/commands.go
[warning] 167-170: contrib/opbot/botmd/commands.go#L167-L170
Added lines #L167 - L170 were not covered by tests
[warning] 179-201: contrib/opbot/botmd/commands.go#L179-L201
Added lines #L179 - L201 were not covered by tests
[warning] 203-213: contrib/opbot/botmd/commands.go#L203-L213
Added lines #L203 - L213 were not covered by tests
[warning] 216-218: contrib/opbot/botmd/commands.go#L216-L218
Added lines #L216 - L218 were not covered by tests
[warning] 243-243: contrib/opbot/botmd/commands.go#L243
Added line #L243 was not covered by tests
[warning] 306-309: contrib/opbot/botmd/commands.go#L306-L309
Added lines #L306 - L309 were not covered by tests
[warning] 313-313: contrib/opbot/botmd/commands.go#L313
Added line #L313 was not covered by tests
[warning] 352-352: contrib/opbot/botmd/commands.go#L352
Added line #L352 was not covered by tests
contrib/opbot/botmd/commands.go (outdated)
res, status, err := b.rfqClient.GetRFQByTxID(ctx.Context(), tx)
if err != nil {
	b.logger.Errorf(ctx.Context(), "error fetching quote request: %v", err)
	_, err := ctx.Response().Reply(fmt.Sprintf("error fetching quote request %s", err.Error()))
Add unit tests for error handling in rfqLookupCommand
Lines 167-170 introduce new error handling when fetching the quote request using GetRFQByTxID. These lines are not covered by unit tests. Consider adding tests to ensure this error handling behaves as expected and to improve code coverage.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 167-170: contrib/opbot/botmd/commands.go#L167-L170
Added lines #L167 - L170 were not covered by tests
if status == "Requested" {
	objects = append(objects, &slack.TextBlockObject{
		Type: slack.MarkdownType,
		Text: "*DestTxHash*: not available",
	})
} else {
	//nolint: gosec
	objects = append(objects, &slack.TextBlockObject{
		Type: slack.MarkdownType,
		Text: fmt.Sprintf("*DestTxHash*: %s", toTXSlackLink(res.BridgeRelay.TransactionHash, uint32(res.Bridge.DestChainID))),
	})
Add unit tests for conditional handling of DestTxHash
Lines 203-213 introduce conditional logic to handle the presence or absence of DestTxHash
based on the status
. This conditional logic is not currently covered by unit tests. Consider adding tests to verify that the correct message is displayed for each possible status.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 203-213: contrib/opbot/botmd/commands.go#L203-L213
Added lines #L203 - L213 were not covered by tests
slackBlocks = append(slackBlocks, slack.NewSectionBlock(nil, objects, nil))

_, err = ctx.Response().ReplyBlocks(slackBlocks, slacker.WithUnfurlLinks(false))
Handle errors when sending Slack message blocks
When sending Slack message blocks (lines 216-218), if an error occurs, it is logged but not communicated to the user. Consider notifying the user when message delivery fails to enhance error transparency and user experience.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 216-218: contrib/opbot/botmd/commands.go#L216-L218
Added lines #L216 - L218 were not covered by tests
contrib/opbot/botmd/commands.go (outdated)
			break
		}
	}
	rawRequest, err := b.rfqClient.GetRFQByTxHash(ctx.Context(), tx)
Add unit tests for fetching quote request in rfqRefund
Line 243 adds a call to GetRFQByTxHash
in the rfqRefund
command. This line is not covered by unit tests. Adding tests will help ensure that fetching the quote request and error handling work correctly.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 243-243: contrib/opbot/botmd/commands.go#L243
Added line #L243 was not covered by tests
Bundle Report

Changes will decrease total bundle size by 3.91MB (-10.96%) ⬇️. This is within the configured threshold ✅

Detailed changes
ℹ️ *Bundle size includes cached data from a previous commit |
Actionable comments posted: 12
🧹 Outside diff range and nitpick comments (28)
packages/rfq-indexer/indexer/.env.example (1)
11-11
: LGTM! Consider adding a brief comment for the new RPC. The addition of WORLDCHAIN_MAINNET_RPC is consistent with the existing naming convention and structure, and it's great to see support for a new network being added.

To improve documentation, consider adding a brief comment about the Worldchain network, similar to the Alchemy recommendation for other RPCs:

 LINEA_MAINNET_RPC=
 BNB_MAINNET_RPC=
+# Worldchain Mainnet RPC URL
 WORLDCHAIN_MAINNET_RPC=
packages/rfq-indexer/indexer/src/utils/formatAmount.ts (2)
3-8: LGTM with suggestions for improvement. The introduction of ADDRESSES_WITH_18_DECIMALS is a good approach to handle multiple tokens with 18 decimals. However, consider the following suggestions:

- Add comments to explain why each address is included, especially for the multiple WLD addresses.
- Consider using a Set instead of an Array for potentially faster lookups, especially if the list grows larger.

Could you clarify why there are multiple WLD addresses? Are these for different networks or contract versions?
11-13: Improved flexibility, but consider edge cases. The updated formatAmount function now handles multiple tokens with 18 decimals, which is an improvement. However, there are a couple of points to consider:

- Error handling: consider adding a check for an invalid tokenAddress input.
- Default decimals: the function defaults to 6 decimals for tokens not in the list. Is this appropriate for all other tokens in your system?

Consider adding error handling and reviewing the default decimal value:

export function formatAmount(amount: bigint, tokenAddress: string): string {
  if (typeof tokenAddress !== 'string' || !tokenAddress.match(/^0x[a-fA-F0-9]{40}$/)) {
    throw new Error('Invalid token address');
  }
  const decimals = ADDRESSES_WITH_18_DECIMALS.includes(tokenAddress.toLowerCase()) ? 18 : 6;
  return formatUnits(amount, decimals);
}

Also, consider creating a more comprehensive mapping of token addresses to their decimal places if 6 is not a suitable default for all other tokens.
packages/rfq-indexer/api/src/queries/depositsQueries.ts (1)
Line range hint
24-28
: Consider improving maintainability and clarity.
The hardcoded timestamp
1722729600
in the where clause might become outdated. Consider using a configurable value or a relative time calculation.

The comment at the end of the function provides valuable context. Consider moving it above the function and expanding it slightly for better clarity. For example:
// Limit lookback to prevent data issues from in-flight relays when the index is partially loaded.
// This also may improve query performance.
export const qDeposits = () => {
  // ... (function body)
}

packages/rfq-indexer/api/src/routes/disputesRoute.ts (1)
Resolve inconsistency in status codes for the /disputes endpoint
To improve clarity and maintain consistency with the implementation:
- Consolidate the 200 status codes into a single response.
- Differentiate the response content to indicate whether disputes are present or not.
Category:
🔗 Analysis chain
Line range hint
44-51
: Resolve inconsistency in status codes for the/disputes
endpointThe current OpenAPI documentation has two 200 status codes with different descriptions, which is inconsistent and may lead to confusion for API consumers.
To improve clarity and maintain RESTful API best practices, consider the following suggestions:
- Merge the two 200 status codes into a single response.
- Use the response body to differentiate between "disputes found" and "no disputes found" scenarios.
Here's a suggested update to the OpenAPI documentation:
responses: description: Successful response content: application/json: schema: oneOf: - type: array items: $ref: '#/components/schemas/Dispute' - type: object properties: message: type: string example: "No disputes found"To ensure consistency between the documentation and implementation, please review the
disputesController
:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the implementation of disputesController
# Expected: The controller should return a 200 status for both cases (disputes found and no disputes found)
rg --type typescript -A 10 "export const disputesController" packages/rfq-indexer/api/src/controllers/

Length of output: 139
Script:
#!/bin/bash
# Description: Check the implementation of disputesController with correct file type
# Expected: The controller should return a 200 status for both cases (disputes found and no disputes found)
rg --type ts -A 10 "export const disputesController" packages/rfq-indexer/api/src/controllers/

Length of output: 1141
Script:
#!/bin/bash
# Description: Extract all res.status calls in disputesController.ts
# Expected: All lines where res.status is called, to verify status codes returned
rg "res\.status" packages/rfq-indexer/api/src/controllers/disputesController.ts

Length of output: 212
packages/rfq-indexer/api/src/routes/invalidRelaysRoute.ts (1)
Line range hint
44-53
: Approve the status code change, but suggest documentation improvement. The change from a 404 to a 200 status code for the case when no recent invalid relays are found is a good improvement. It aligns better with RESTful principles and makes the API more consistent.
However, there's now a duplicate
200
status code in the documentation, which might cause confusion. Consider merging the two200
responses into a single entry to improve clarity.Here's a suggested improvement for the OpenAPI documentation:
responses: description: Successful response (including when no recent invalid relays are found) content: application/json: schema: oneOf: - type: array items: type: object properties: Bridge: type: object description: General transaction fields BridgeRequest: type: object description: Deposit information BridgeRelay: type: object description: Relay information BridgeRefund: type: object description: Refund information BridgeProof: type: object description: Proof information (if available) BridgeClaim: type: object description: Claim information (if available) BridgeDispute: type: object description: Dispute information (if available) - type: object properties: message: type: string example: "No recent invalid relays found" description: Server error content: application/json: schema: type: object properties: message: type: stringThis structure clearly indicates that a 200 status can return either an array of invalid relays or a message when no relays are found.
packages/rfq-indexer/api/src/routes/refundedAndRelayedRoute.ts (1)
Line range hint
44-58
: Approve API behavior change, but OpenAPI documentation needs adjustment. The change from a 404 to a 200 status code for the case when no transactions are found is a good practice in RESTful API design. However, the OpenAPI documentation now has two entries for the 200 status code, which is incorrect and may cause issues with API documentation tools.
To fix this, please combine the two 200 status descriptions. Here's a suggested correction:
* 200: -* description: Successful response +* description: Successful response (including when no transactions are found) * content: * application/json: * schema: -* type: array -* items: - * type: object - * properties: - * Bridge: - * type: object - * description: General transaction fields - * BridgeRequest: - * type: object - * description: Deposit information - * BridgeRelay: - * type: object - * description: Relay information - * BridgeRefund: - * type: object - * description: Refund information - * BridgeProof: - * type: object - * description: Proof information (if available) - * BridgeClaim: - * type: object - * description: Claim information (if available) - * BridgeDispute: - * type: object - * description: Dispute information (if available) -* 200: -* description: No refunded and relayed transactions found -* content: -* application/json: -* schema: * type: object * properties: - * message: + * data: + * type: array + * items: + * type: object + * properties: + * Bridge: + * type: object + * description: General transaction fields + * BridgeRequest: + * type: object + * description: Deposit information + * BridgeRelay: + * type: object + * description: Relay information + * BridgeRefund: + * type: object + * description: Refund information + * BridgeProof: + * type: object + * description: Proof information (if available) + * BridgeClaim: + * type: object + * description: Claim information (if available) + * BridgeDispute: + * type: object + * description: Dispute information (if available) + * message: * type: string +* description: Informational message (e.g., "No refunded and relayed transactions found")This change ensures that the OpenAPI documentation correctly represents the API's behavior while maintaining a single 200 status code entry.
packages/rfq-indexer/api/src/routes/transactionIdRoute.ts (2)
Line range hint
49-57
: Reconsider merging 404 status into 200 response. The current changes merge the 404 "Transaction not found" response into the 200 status code. This approach goes against RESTful API best practices and may lead to several issues:
- It can confuse API consumers who expect a 404 status for not found resources.
- It complicates error handling for API clients, as they now need to parse the response body to determine if the resource was found or not.
- It creates an inconsistent response structure, where a 200 status could indicate either a successful retrieval or a not found scenario.
Consider reverting this change and maintaining separate 200 and 404 status codes for clarity and adherence to RESTful principles. Here's a suggested structure:
* 200:
-* description: Transaction not found
+* description: Successful response
* content:
* application/json:
* schema:
* type: object
* properties:
-* message:
-* type: string
+* // ... (keep the existing properties)
+* 404:
+* description: Transaction not found
+* content:
+* application/json:
+* schema:
+* type: object
+* properties:
+* message:
+* type: string

This structure maintains a clear distinction between successful and not found responses, improving API usability and client-side error handling.
BridgeDispute Property Missing in getTransactionById Implementation
The
BridgeDispute
property is defined in the response schema but is not included in thegetTransactionById
function withintransactionIdController.ts
. This omission may result in incomplete data being returned to API consumers.
- Action Required: Update the
getTransactionById
function to include and populate theBridgeDispute
property in its response.🔗 Analysis chain
Line range hint
43-45
: Verify implementation and consider adding examples for BridgeDisputeThe addition of the
BridgeDispute
property to the response schema is a good improvement for providing dispute information. To enhance the documentation further:
- Ensure that the
getTransactionById
controller function intransactionIdController.ts
has been updated to include this new property in the response.- Consider adding an example response in the OpenAPI documentation to illustrate the structure of the
BridgeDispute
object.To verify the implementation, you can run the following command:
If the implementation is confirmed, consider adding an example response to the OpenAPI documentation:
* BridgeDispute: * type: object * description: Dispute information (if available) +* example: +* disputeId: "123456" +* status: "open" +* createdAt: "2023-05-01T12:00:00Z"This addition would provide API consumers with a clearer understanding of the
BridgeDispute
structure.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check if the getTransactionById function includes BridgeDispute in its response rg -A 10 "function getTransactionById" packages/rfq-indexer/api/src/controllers/transactionIdController.tsLength of output: 108
Script:
#!/bin/bash # Search for the BridgeDispute property in transactionIdController.ts rg "BridgeDispute" packages/rfq-indexer/api/src/controllers/transactionIdController.tsLength of output: 86
Script:
#!/bin/bash # Search for the BridgeDispute property in the entire api/src directory rg "BridgeDispute" packages/rfq-indexer/api/src/Length of output: 806
packages/rfq-indexer/indexer/ponder.schema.ts (1)
8-8
: Approved, but consider adding documentation for clarity.The addition of the
request
field to theBridgeRequestEvents
table is consistent with the existing schema structure. However, to improve code maintainability and clarity, consider adding a comment explaining the purpose and expected content of this field.Example documentation:
// The serialized bridge request data request: p.string(),packages/rfq-indexer/api/src/controllers/pendingTransactionsController.ts (1)
124-125
: Approve consistent status code change and suggest refactoring. The status code change from 404 to 200 when no pending transactions missing relay are found is consistent with previous controllers. This improves API consistency across endpoints.
Consider refactoring the common response logic into a shared function to reduce code duplication across controllers.
Here's a suggested refactoring:
function sendNoResultsResponse(res: Response, message: string) {
  res.status(200).json({ message });
}

// Usage in controllers:
// sendNoResultsResponse(res, 'No pending transactions missing relay found');

packages/rfq-indexer/api/src/routes/pendingTransactionsRoute.ts (1)
170-170
: Fix typo in '/exceed-deadline' endpoint description. There's a small typo in the description for the '/exceed-deadline' endpoint.
Please apply the following change:
- * description: No pending transactionst that exceed the deadline found
+ * description: No pending transactions that exceed the deadline found

services/explorer/graphql/client/queries/queries.graphql (1)
158-158
: LGTM. Consider updating documentation. The addition of the
worldchain
field to theGetDailyStatisticsByChain
query response is consistent with the existing structure and aligns with the PR objectives. This change enhances the query's capabilities by including statistics for the "worldchain".Please ensure that any relevant documentation is updated to reflect this new field. Also, consider notifying API consumers about this addition, as it might affect clients that rely on a specific order of fields in the response.
packages/rfq-indexer/indexer/ponder.config.ts (2)
73-78
: LGTM: Worldchain configuration added
The worldchain configuration is consistent with other chain configurations. Good use of environment variable for the RPC endpoint.
Consider clarifying the comment for FastBridgeV2StartBlock. It's marked as both "first block and new block", which might be confusing. If it's truly the first block where the contract was deployed, you might want to remove the "and new block" part.
199-203
: LGTM: Worldchain added to Ponder config
The addition of worldchain to both the networks and contracts configurations in the Ponder config is consistent with other chain configurations. Good use of optional chaining in the contracts configuration.
Consider reviewing the commented out disableCache option. If it's not needed for any chains, you might want to remove these comments across all chain configurations for cleaner code.
Also applies to: 240-244
packages/explorer-ui/components/ChainChart/index.tsx (1)
323-328
: LGTM! Consider adding a comment for consistency.
The addition of the "worldchain" bar is implemented correctly and follows the existing pattern for other chains. The conditional fill color based on the loading state is properly handled.
For consistency with other chains, consider adding a brief comment above this Bar component to indicate what "worldchain" represents, similar to how you might document it for other chains if they were being added.
+ {/* WorldChain - Add a brief description here */}
  <Bar
    isAnimationActive={false}
    dataKey="worldchain"
    stackId="a"
    fill={loading ? 'rgba(255, 255, 255, 0.1)' : '#FFFFFF'}
  />
services/explorer/consumer/parser/tokendata/cache.go (1)
290-295
: LGTM! Consider grouping tokens by chain ID for improved readability.
The new token entries for WLD, USDC.e, and WETH are correctly added to the tokenDataMap. The entries follow the existing pattern and provide the necessary information (symbol, decimals, and address) for each token.
To improve code readability, consider grouping the tokens by chain ID. This would make it easier to identify which tokens are available on each chain at a glance.
Here's a suggested regrouping:
var tokenDataMap = map[string]TokenData{
	// Ethereum (Chain ID: 1)
	"1_0x5f98805A4E8be255a32880FDeC7F6728C6568bA0": {"LUSD", 18, "0x5f98805A4E8be255a32880FDeC7F6728C6568bA0"},
	"1_0xf939E0A03FB07F59A73314E73794Be0E57ac1b4E": {"crvUSD", 18, "0xf939E0A03FB07F59A73314E73794Be0E57ac1b4E"},
	"1_0x853d955acef822db058eb8505911ed77f175b99e": {"FRAX", 18, "0x853d955acef822db058eb8505911ed77f175b99e"},
	"1_0xAdF7C35560035944e805D98fF17d58CDe2449389": {"SPEC", 18, "0xAdF7C35560035944e805D98fF17d58CDe2449389"},
	"1_0x163f8c2467924be0ae7b5347228cabf260318753": {"WLD", 18, "0x163f8c2467924be0ae7b5347228cabf260318753"},
	// Optimism (Chain ID: 10)
	"10_0x8c6f28f2F1A3C87F0f938b96d27520d9751ec8d9": {"sUSD", 18, "0x8c6f28f2F1A3C87F0f938b96d27520d9751ec8d9"},
	"10_0xdC6fF44d5d932Cbd77B52E5612Ba0529DC6226F1": {"WLD", 18, "0xdC6fF44d5d932Cbd77B52E5612Ba0529DC6226F1"},
	// Polygon (Chain ID: 137)
	"137_0x45c32fA6DF82ead1e2EF74d17b76547EDdFaFF89": {"FRAX", 18, "0x45c32fA6DF82ead1e2EF74d17b76547EDdFaFF89"},
	// Arbitrum One (Chain ID: 42161)
	"42161_0x17FC002b466eEc40DaE837Fc4bE5c67993ddBd6F": {"FRAX", 18, "0x17FC002b466eEc40DaE837Fc4bE5c67993ddBd6F"},
	// Base (Chain ID: 8453)
	"8453_0x417Ac0e078398C154EdFadD9Ef675d30Be60Af93": {"crvUSD", 18, "0x417Ac0e078398C154EdFadD9Ef675d30Be60Af93"},
	"8453_0xd9aAEc86B65D86f6A7B5B1b0c42FFA531710b6CA": {"USDbC", 6, "0xd9aAEc86B65D86f6A7B5B1b0c42FFA531710b6CA"},
	"8453_0x50c5725949A6F0c72E6C4a641F24049A917DB0Cb": {"DAI", 18, "0x50c5725949A6F0c72E6C4a641F24049A917DB0Cb"},
	"8453_0x96419929d7949D6A801A6909c145C8EEf6A40431": {"SPEC", 18, "0x96419929d7949D6A801A6909c145C8EEf6A40431"},
	// Worldchain (Chain ID: 480)
	"480_0x2cFc85d8E48F8EAB294be644d9E25C3030863003": {"WLD", 18, "0x2cFc85d8E48F8EAB294be644d9E25C3030863003"},
	"480_0x79A02482A880bCE3F13e09Da970dC34db4CD24d1": {"USDC.e", 6, "0x79A02482A880bCE3F13e09Da970dC34db4CD24d1"},
	"480_0x4200000000000000000000000000000000000006": {"WETH", 18, "0x4200000000000000000000000000000000000006"},
}

contrib/opbot/botmd/commands.go (3)
160-170
: Improved command description and error handling
The updated description accurately reflects the new data source. The error handling is appropriate, but consider adding a nil check for b.logger to prevent potential nil pointer dereferences.

if b.logger != nil {
  b.logger.Errorf(ctx.Context(), "error fetching quote request: %v", err)
}
Line range hint
297-312
: Improved submission status retrieval with potential overflow risk
The new implementation using b.submitter.GetSubmissionStatus provides better error handling and retry logic. However, there's a potential integer overflow risk when converting rawRequest.Bridge.OriginChainID to int64 on line 297.
Consider using a safer conversion method:

- status, err = b.submitter.GetSubmissionStatus(ctx, big.NewInt(int64(rawRequest.Bridge.OriginChainID)), nonce)
+ status, err = b.submitter.GetSubmissionStatus(ctx, new(big.Int).SetUint64(uint64(rawRequest.Bridge.OriginChainID)), nonce)

This change ensures that the conversion is safe for all possible uint32 values.
🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 316-316:
G115: integer overflow conversion int -> uint32 (gosec)
Line range hint
324-340
: Updated makeFastBridge function with potential overflow risk
The makeFastBridge function has been successfully updated to use the new internal.GetRFQByTxIDResponse type. However, there's a potential integer overflow risk when converting req.Bridge.OriginChainID to int on line 335.
Consider using a safer conversion method:

- chainClient, err := b.rpcClient.GetChainClient(ctx, int(req.Bridge.OriginChainID))
+ chainClient, err := b.rpcClient.GetChainClient(ctx, int64(req.Bridge.OriginChainID))

Ensure that the GetChainClient method can handle int64 values. If not, implement a safe casting function to handle potential overflows.
🧰 Tools
🪛 GitHub Check: Lint (contrib/opbot)
[failure] 340-340:
G115: integer overflow conversion int -> uint32 (gosec)
[failure] 335-335:
unnecessary conversion (unconvert)services/explorer/graphql/client/client.go (1)
109-133
: LGTM! Consider adding a comment for the new Worldchain field.
The addition of the Worldchain field to the GetDailyStatisticsByChain struct is correct and consistent with the existing structure. The use of a pointer to float64 and the JSON/GraphQL tags are appropriate.
To improve code readability, consider adding a brief comment explaining what the Worldchain represents, similar to other fields if they have comments.
services/explorer/graphql/server/graph/partials.go (5)
404-405
: LGTM! Consider adding a comment for the new worldchain entry.
The addition of the worldchain entry is correct and follows the existing pattern. To improve code readability and maintainability, consider adding a comment explaining the significance of chain ID 480 or the worldchain network.

  results[534352] AS scroll,
  results[59144] AS linea,
+ -- Worldchain network (chain ID: 480)
  results[480] AS worldchain,
449-450
: LGTM! Consider adding a comment for consistency.
The addition of the worldchain entry is correct and consistent with the previous modification. For consistency and improved readability, consider adding a comment here as well.

  results[59144] AS linea,
+ -- Worldchain network (chain ID: 480)
  results[480] AS worldchain,
546-547
: LGTM! Maintain consistency with comments.
The addition of the worldchain entry is correct and consistent with the previous modifications. To maintain consistency across all similar changes, consider adding a comment here as well.

  results[59144] AS linea,
+ -- Worldchain network (chain ID: 480)
  results[480] AS worldchain,
681-682
: LGTM! Maintain comment consistency.
The addition of the worldchain entry is correct and consistent with the previous modifications. For uniformity across all similar changes, consider adding a comment here as well.

  results[59144] AS linea,
+ -- Worldchain network (chain ID: 480)
  results[480] AS worldchain,
714-715
: LGTM! Maintain comment consistency and consider a global update.
The addition of the worldchain entry is correct and consistent with all previous modifications. As before, consider adding a comment for clarity:

  results[59144] AS linea,
+ -- Worldchain network (chain ID: 480)
  results[480] AS worldchain,
Given that this change has been applied consistently across multiple SQL query constants, it might be beneficial to consider a more maintainable approach for future additions. You could explore using a centralized list of chain IDs and their corresponding names, which could be used to generate these query parts dynamically. This would reduce repetition and make future updates easier to manage.
services/explorer/graphql/server/graph/resolver/server.go (2)
103-127
: LGTM! Consider alphabetical ordering of fields.
The addition of the Worldchain field to the DateResultByChain struct is consistent with the existing pattern and maintains type consistency.
For improved readability and easier maintenance, consider ordering the fields alphabetically. This would place Worldchain before Total:

  Scroll     func(childComplexity int) int
  Total      func(childComplexity int) int
- Worldchain func(childComplexity int) int
+ Worldchain func(childComplexity int) int
4844-4884
: LGTM! Consider enhancing error handling.
The implementation of _DateResultByChain_worldchain and fieldContext_DateResultByChain_worldchain functions is consistent with the existing code structure and follows good practices for error handling and recovery.
Consider enhancing the error handling in the fieldContext_DateResultByChain_worldchain function to provide more specific error messages:

  Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
-   return nil, errors.New("field of type Float does not have child fields")
+   return nil, fmt.Errorf("field 'worldchain' of type Float does not have child fields")
  },

This change would make debugging easier by providing more context in the error message.
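As a standalone illustration of why the wrapped message helps (childFieldError is an illustrative helper, not part of the generated gqlgen code), compare the generic and field-specific errors side by side:

```go
package main

import (
	"errors"
	"fmt"
)

// childFieldError mirrors the suggestion above: include the field name in
// the error so logs identify which resolver rejected the child selection.
func childFieldError(fieldName string) error {
	return fmt.Errorf("field %q of type Float does not have child fields", fieldName)
}

func main() {
	generic := errors.New("field of type Float does not have child fields")
	specific := childFieldError("worldchain")
	fmt.Println(generic)  // gives no hint about which field failed
	fmt.Println(specific) // names the offending field
}
```

In a schema with dozens of Float leaf fields, only the second form lets an operator trace a failing query back to the exact resolver.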
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (38)
- contrib/opbot/botmd/commands.go (10 hunks)
- contrib/opbot/internal/model.go (1 hunks)
- packages/explorer-ui/components/ChainChart/index.tsx (1 hunks)
- packages/rfq-indexer/api/src/constants/networkConfig.ts (2 hunks)
- packages/rfq-indexer/api/src/controllers/conflictingProofsController.ts (1 hunks)
- packages/rfq-indexer/api/src/controllers/disputesController.ts (1 hunks)
- packages/rfq-indexer/api/src/controllers/pendingTransactionsController.ts (7 hunks)
- packages/rfq-indexer/api/src/controllers/transactionIdController.ts (2 hunks)
- packages/rfq-indexer/api/src/graphql/resolvers.ts (1 hunks)
- packages/rfq-indexer/api/src/graphql/types/events.graphql (1 hunks)
- packages/rfq-indexer/api/src/queries/depositsQueries.ts (1 hunks)
- packages/rfq-indexer/api/src/routes/conflictingProofsRoute.ts (1 hunks)
- packages/rfq-indexer/api/src/routes/disputesRoute.ts (1 hunks)
- packages/rfq-indexer/api/src/routes/invalidRelaysRoute.ts (1 hunks)
- packages/rfq-indexer/api/src/routes/pendingTransactionsRoute.ts (4 hunks)
- packages/rfq-indexer/api/src/routes/refundedAndRelayedRoute.ts (1 hunks)
- packages/rfq-indexer/api/src/routes/transactionIdRoute.ts (1 hunks)
- packages/rfq-indexer/api/src/types/index.ts (1 hunks)
- packages/rfq-indexer/indexer/.env.example (1 hunks)
- packages/rfq-indexer/indexer/ponder.config.ts (5 hunks)
- packages/rfq-indexer/indexer/ponder.schema.ts (1 hunks)
- packages/rfq-indexer/indexer/src/index.ts (2 hunks)
- packages/rfq-indexer/indexer/src/utils/chains.ts (1 hunks)
- packages/rfq-indexer/indexer/src/utils/formatAmount.ts (1 hunks)
- services/explorer/api/server_test.go (1 hunks)
- services/explorer/consumer/parser/rfqparser.go (1 hunks)
- services/explorer/consumer/parser/tokendata/cache.go (1 hunks)
- services/explorer/graphql/client/client.go (2 hunks)
- services/explorer/graphql/client/queries/queries.graphql (1 hunks)
- services/explorer/graphql/server/graph/model/models_gen.go (1 hunks)
- services/explorer/graphql/server/graph/partials.go (6 hunks)
- services/explorer/graphql/server/graph/resolver/server.go (6 hunks)
- services/explorer/graphql/server/graph/schema/types.graphql (1 hunks)
- services/explorer/static/chainIDs.yaml (1 hunks)
- services/explorer/static/tokenIDToCoinGeckoID.yaml (1 hunks)
- services/explorer/static/tokenSymbolToCoinGeckoID.yaml (1 hunks)
- services/explorer/static/tokenSymbolToTokenID.yaml (1 hunks)
- services/rfq/relayer/quoter/quoter_test.go (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- packages/rfq-indexer/api/src/controllers/conflictingProofsController.ts
- packages/rfq-indexer/api/src/controllers/disputesController.ts
- packages/rfq-indexer/api/src/controllers/transactionIdController.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- contrib/opbot/internal/model.go
🧰 Additional context used
🪛 GitHub Check: Lint (contrib/opbot)
contrib/opbot/botmd/commands.go
[failure] 316-316:
G115: integer overflow conversion int -> uint32 (gosec)
[failure] 340-340:
G115: integer overflow conversion int -> uint32 (gosec)
[failure] 335-335:
unnecessary conversion (unconvert)
🔇 Additional comments (30)
packages/rfq-indexer/indexer/src/utils/chains.ts (1)
10-10
: Verify the new chain entry and consider adding documentation.
The new entry 480: 'worldchain' is consistent in format with other entries in the chainIdToName object. However, 'worldchain' is not a widely recognized blockchain name. Please verify the following:
- Confirm that 480 is the correct chain ID for 'worldchain'.
- Ensure 'worldchain' is the correct name and not a typo.
- Consider adding a comment or documentation explaining the purpose or source of this chain, as it's not commonly known.
This addition will cause getChainName(480) to return 'worldchain' instead of 'unknown'.
To help verify the existence and correctness of this chain, you can run the following script:
✅ Verification successful
Chain entry 480: 'worldchain' is verified and consistent across the codebase.
No issues found with the addition. The chain ID 480 and name 'worldchain' are correctly referenced in multiple parts of the project.
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for other occurrences of 'worldchain' or chain ID 480 in the codebase
echo "Searching for 'worldchain':"
rg --type typescript --type javascript 'worldchain'
echo "\nSearching for chain ID 480:"
rg --type typescript --type javascript '\b480\b'

Length of output: 312
Script:
#!/bin/bash
# Description: Search for other occurrences of 'worldchain' or chain ID 480 in the codebase
echo "Searching for 'worldchain':"
rg --type ts --type js 'worldchain'
echo "\nSearching for chain ID 480:"
rg --type ts --type js '\b480\b'

Length of output: 14522
packages/rfq-indexer/indexer/.env.example (1)
Line range hint
1-14
: File structure and consistency maintained.
The addition of the new RPC URL maintains the file's overall structure and consistency. The blank lines and existing comments have been preserved, ensuring the file remains well-organized and readable.
services/explorer/static/chainIDs.yaml (1)
22-22
: New chain ID added: Verify Worldchain details
The addition of Worldchain (ID: 480) to the chainIDs.yaml file looks correct in terms of formatting and consistency with other entries. However, please ensure the following:
- Verify that 480 is indeed the correct chain ID for Worldchain.
- Confirm that Worldchain is the official or commonly accepted name for this blockchain.
- Consider if any systems or components relying on this file need to be updated to recognize the new chain.
To ensure the correctness of this addition, please run the following verification:
This script will help identify any potential issues or inconsistencies with the new addition.
✅ Verification successful
New chain ID added: Worldchain (ID: 480) Verified
The addition of Worldchain with chain ID 480 has been successfully verified. No duplicate IDs were found, and the entry format is consistent with existing entries. Additionally, multiple references to Worldchain across the codebase confirm its proper integration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify the new chain ID doesn't conflict with existing entries and is correctly formatted

# Check for duplicate chain IDs
duplicate_ids=$(sort services/explorer/static/chainIDs.yaml | uniq -d)
if [ -n "$duplicate_ids" ]; then
  echo "Warning: Duplicate chain IDs found:"
  echo "$duplicate_ids"
fi

# Verify the format of the new entry
new_entry=$(grep "480:" services/explorer/static/chainIDs.yaml)
if [[ ! "$new_entry" =~ ^480:\ \'Worldchain\'$ ]]; then
  echo "Warning: New entry format might be incorrect. Please check:"
  echo "$new_entry"
fi

# Check if Worldchain is mentioned in other project files (potential integration points)
mentions=$(rg -i "worldchain" --type yaml --type go --type js --type ts)
if [ -n "$mentions" ]; then
  echo "Info: Other mentions of Worldchain found in the project:"
  echo "$mentions"
fi

Length of output: 21359
services/explorer/static/tokenSymbolToTokenID.yaml (1)
45-45
: LGTM! New token mapping added correctly.
The new entry for 'wld' (Worldcoin) has been added following the existing format and style conventions of the file.
To ensure the accuracy of the new token ID, please run the following verification script:
This script uses the GitHub GraphQL API to query for the token with ID 'worldcoin'. If it returns valid data, it confirms that the token ID is correct in the system.
services/explorer/static/tokenIDToCoinGeckoID.yaml (1)
43-43
: LGTM! Consider verifying the CoinGecko ID.
The new entry for the 'WLD' token follows the correct format and appears to be accurate. The mapping of 'WLD' to 'worldcoin' is logical and consistent with the existing entries.
To ensure the accuracy of the CoinGecko ID, please run the following verification script:
This script will help confirm that the CoinGecko ID 'worldcoin' indeed corresponds to the Worldcoin token (WLD).
packages/rfq-indexer/api/src/graphql/types/events.graphql (1)
20-20
: New field added to BridgeRequestEvent: Verify impact and usage
The addition of the request: String! field to the BridgeRequestEvent type seems to align with the PR objective of refactoring opbot to use the GraphQL API. This new field could potentially store serialized request data or a request identifier.
Consider the following points:
- Ensure that this field is populated correctly in the corresponding resolver.
- Update any existing queries or mutations that use the BridgeRequestEvent type to include this new field if necessary.
- Verify that the opbot component is updated to handle this new field in its logic.
- Consider adding a comment to explain the purpose and content of this field for better documentation.
To ensure this change is properly integrated, please run the following script:
packages/rfq-indexer/api/src/routes/conflictingProofsRoute.ts (1)
Line range hint
44-52
: Consider revising the API response structure for clarity and consistency.
The addition of a second 200 status code for the case when no conflicting proofs are found is unconventional and may lead to confusion. Here are some points to consider:
- Having two 200 responses with different schemas is not typical in RESTful API design and may cause issues with API consumers.
- The removal of the 404 status (as mentioned in the AI summary) might not be the best approach, as it's a standard way to indicate that no resources were found.
Consider the following alternatives:
- Use a single 200 response with an array that can be empty when no conflicting proofs are found.
- Reintroduce the 404 status for the case when no conflicting proofs are found, as this aligns better with RESTful practices.
Here's a suggested revision:
* 200:
*   description: Successful response
*   content:
*     application/json:
*       schema:
*         type: array
*         items:
*           type: object
*           properties:
*             Bridge:
*               type: object
*               description: General transaction fields
*             BridgeRequest:
*               type: object
*               description: Deposit information
*             BridgeRelay:
*               type: object
*               description: Relay information
*             BridgeRefund:
*               type: object
*               description: Refund information
*             BridgeProof:
*               type: object
*               description: Proof information (if available)
*             BridgeClaim:
*               type: object
*               description: Claim information (if available)
*             BridgeDispute:
*               type: object
*               description: Dispute information (if available)
-* 200:
-*   description: No conflicting proofs found
-*   content:
-*     application/json:
-*       schema:
-*         type: object
-*         properties:
-*           message:
-*             type: string
+* 404:
+*   description: No conflicting proofs found
+*   content:
+*     application/json:
+*       schema:
+*         type: object
+*         properties:
+*           message:
+*             type: string
* 500:
*   description: Server error
*   content:
*     application/json:
*       schema:
*         type: object
*         properties:
*           message:
*             type: string

This change maintains a clear distinction between successful responses with data (200) and cases where no data is found (404), adhering to RESTful API conventions.
To ensure consistency across the codebase, let's check for similar patterns in other route files:
This will help us verify if the suggested change aligns with the existing API documentation patterns in the project.
packages/rfq-indexer/api/src/constants/networkConfig.ts (3)
11-11
: LGTM: Import statement addition is correct.
The addition of worldchain to the import statement is consistent with the existing code style and necessary for the new network configuration.
118-128
: LGTM: New network configuration entry is structurally correct.
The new entry for Worldchain (chain ID 480) follows the established pattern for network configurations in this file. The use of createPublicClient and the overall structure are consistent with other network entries.
118-118
: Verify the chain ID for Worldchain.
Please confirm that 480 is the correct chain ID for Worldchain. It's important to ensure that this ID is unique and officially assigned to the Worldchain network.
#!/bin/bash
# Description: Check if the chain ID 480 is unique in the configuration
echo "Occurrences of chain ID 480 in the configuration:"
grep -n "480:" packages/rfq-indexer/api/src/constants/networkConfig.ts
echo "\nAll chain IDs in the configuration:"
grep -oP '^\s*\K\d+(?=:)' packages/rfq-indexer/api/src/constants/networkConfig.ts | sort -n

packages/rfq-indexer/api/src/controllers/pendingTransactionsController.ts (4)
41-42
: Approve status code change for consistency.
Changing the status code from 404 to 200 when no pending transactions are found is a good practice. This improves API consistency and allows clients to distinguish between "no results" and "endpoint not found" scenarios more easily.
79-80
: Approve consistent status code change.
Good job maintaining consistency by changing the status code from 404 to 200 when no pending transactions missing proof are found. This change aligns with the previous controller and improves overall API consistency.
4-4
: Approve import statement cleanup.
Good job on removing the unused import. This helps keep the code clean and improves maintainability.
Let's verify that qDisputes is indeed unused in this file:

#!/bin/bash
# Description: Check if qDisputes is used in the file

# Test: Search for qDisputes usage
rg --type typescript 'qDisputes' packages/rfq-indexer/api/src/controllers/pendingTransactionsController.ts
169-170
: Approve consistent status code change and suggest comprehensive testing.
The status code change from 404 to 200 when no pending transactions are found is consistent with all previous controllers. This completes the standardization of the API response pattern across all endpoints in this file.
To ensure this consistency is maintained:
- Implement the suggested refactoring from the previous comment.
- Create a comprehensive test suite that verifies the behavior of all endpoints, especially for the "no results" scenario.
Let's check for existing tests:
If no tests are found, consider creating them to ensure the consistency of these changes across all endpoints.
packages/rfq-indexer/indexer/src/index.ts (1)
19-19
: Approved: Enhanced bridge request tracking with newrequest
property.
The addition of the
request
property to theFastBridgeV2:BridgeRequested
event data structure improves the detail of information captured during bridge requests. This change enhances the system's ability to track and analyze bridge requests.However, I have a few suggestions to improve clarity and ensure smooth integration:
Please add a comment explaining the purpose and structure of the
request
object. This will help future developers understand its significance and usage.Verify if any database schema changes are required to accommodate this new property. Ensure that the database can properly store and query the
request
data.To verify the impact of this change, please run the following script:
This script will help identify any other files that might need to be updated to handle the new
request
property, as well as any database migrations that might be related to this change.Also applies to: 37-37
packages/rfq-indexer/api/src/routes/pendingTransactionsRoute.ts (1)
Line range hint
49-54
: Reconsider changing 404 status codes to 200 for "not found" scenarios
The changes consistently replace 404 status codes with 200 status codes for cases where no pending transactions are found across all four endpoints (/missing-claim, /missing-proof, /missing-relay, /exceed-deadline). While this approach provides consistency, it deviates from RESTful API best practices and may lead to several issues:
- It breaks the semantic meaning of HTTP status codes, where 404 is typically used for "not found" scenarios.
- It may confuse API consumers who expect a 404 status for "not found" cases.
- This change could break existing client implementations that rely on 404 status codes.
- It potentially masks actual errors and could make debugging more difficult.
Consider the following alternatives:
- Revert to using 404 status codes for "not found" scenarios, maintaining RESTful best practices.
- If the goal is to provide more detailed information, consider using a 200 status code with a structured response that includes a boolean field indicating whether items were found, along with the actual data or a message.
Example for option 2:
{
  "found": false,
  "message": "No pending transactions missing claim found",
  "data": []
}

This approach maintains the 200 status code while providing clear information about the result.
To assess the impact of this change, please run the following script to check for any client code that might be affected:
This script will help identify any client-side code that may need to be updated if we proceed with this change.
Also applies to: 90-95, 129-134, 168-173
packages/rfq-indexer/indexer/ponder.config.ts (2)
16-16
: LGTM: New chain ID constant added
The addition of the worldchainChainId constant is consistent with the existing pattern for other chain IDs in the file.
147-154
: LGTM: Worldchain added to networkDetails
The addition of worldchain to the networkDetails object is consistent with other chain configurations. It correctly uses the values from the previously defined configByChainId, ensuring consistency across the configuration.

services/explorer/consumer/parser/tokendata/cache.go (1)
services/explorer/consumer/parser/tokendata/cache.go (1)
290-295
: Summary: New token entries added successfully
The changes in this file are limited to adding new token entries for WLD, USDC.e, and WETH to the tokenDataMap. These additions expand the available token data without modifying any existing functionality. The new entries follow the established pattern and are consistent with the rest of the map.
Key points:
- WLD token added for chain IDs 480, 10, and 1
- USDC.e token added for chain ID 480
- WETH token added for chain ID 480
These changes enhance the token data available within the service, ensuring that the cache can accommodate these new tokens, which are not part of the existing bridge configuration.
contrib/opbot/botmd/commands.go (3)
23-23
: Import of internal package approved
The addition of the internal package import is a good practice for encapsulation and code organization.
243-243
: Simplified quote request fetching
The use of b.rfqClient.GetRFQ streamlines the process of fetching the quote request, improving code clarity and maintainability.
278-287
: Improved refund submission process
The updated refund submission process using b.submitter.SubmitTransaction appears more robust and flexible. It provides better control over the transaction submission and error handling.

services/rfq/relayer/quoter/quoter_test.go (2)
261-284
: Excellent addition of new test cases!
The new test cases for TestGetOriginAmount significantly improve the coverage of different scenarios. They test various combinations of QuotePct, QuoteOffset, and MinQuoteAmount, ensuring that the GetOriginAmount function behaves correctly under different conditions. This enhancement will help maintain the reliability of the quoting mechanism.
Line range hint
385-486
: Comprehensive test coverage for GetDestAmount
Although there are no new changes in the TestGetDestAmount function, it's worth noting that the existing test cases provide thorough coverage. They test various scenarios with different QuoteWidthBps and QuoteOffsetBps values, including edge cases. This comprehensive set of tests helps ensure the reliability of the GetDestAmount
function.

services/explorer/graphql/server/graph/model/models_gen.go (1)
84-108
: LGTM! New field added consistently.
The addition of the
Worldchain
field to theDateResultByChain
struct is consistent with the existing pattern and follows proper Go conventions. The field is correctly defined as a pointer tofloat64
with appropriate JSON tags.To ensure the change is fully integrated, please run the following script to check for any other occurrences of
DateResultByChain
that might need updating:This script will help identify any areas in the codebase where the
DateResultByChain
struct is used, allowing you to verify if those areas need to be updated to accommodate the newWorldchain
field.✅ Verification successful
Verification Successful
The addition of the
Worldchain
field has been properly handled across all relevant usages ofDateResultByChain
. No issues were found during verification.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Find usages of DateResultByChain struct that might need updating # Search for DateResultByChain usage echo "Searching for DateResultByChain usage:" rg --type go "DateResultByChain" -C 3 # Search for specific operations on DateResultByChain that might need to include Worldchain echo "\nSearching for operations on DateResultByChain fields:" rg --type go "DateResultByChain.*\.(Ethereum|Optimism|Cronos|Bsc|Polygon|Fantom|Boba|Metis|Moonbeam|Moonriver|Klaytn|Arbitrum|Avalanche|Dfk|Aurora|Harmony|Canto|Dogechain|Base|Blast|Scroll|Linea)" -C 3Length of output: 100194
services/explorer/graphql/client/client.go (1)
506-506
: LGTM! The GraphQL query has been correctly updated.
The addition of the
worldchain
field to theGetDailyStatisticsByChainDocument
query is correct and consistent with the struct changes. The placement and formatting of the new field align well with the existing structure of the query.services/explorer/graphql/server/graph/resolver/server.go (4)
664-670
: LGTM! Consistent implementation of complexity calculation.
The addition of the complexity calculation for the
Worldchain
field is well-implemented and consistent with the existing code structure. The null check before calling the complexity function is a good practice to prevent potential nil pointer dereferences.
1675-1675
: LGTM! Consistent schema update.
The addition of the
worldchain
field of typeFloat
to the GraphQL schema is consistent with the existing structure and maintains the alphabetical order of fields. TheFloat
type is appropriate for representing transaction volume data.
7243-7244
: LGTM! Consistent field context handling.The addition of the case for
worldchain
in the field context creation switch statement is consistent with the existing code structure and maintains the alphabetical order of fields.
10489-10490
: LGTM! Consistent field resolution handling.The addition of the case for
worldchain
in the field resolution loop is consistent with the existing code structure and maintains the alphabetical order of fields.
@@ -40,3 +40,4 @@ usdbc: 'usd-coin'
crvusd: 'usd-coin'
usdb: 'usdb'
spec: 'spectral'
wld: 'worldcoin'
💡 Codebase verification
Inconsistency Found with CoinGecko ID for WLD Token
The CoinGecko ID 'worldcoin' has the symbol 'wdc', but the mapping is 'wld: "worldcoin"'. Please verify and update the token symbol to ensure accurate association.
🔗 Analysis chain
LGTM! Verify CoinGecko ID for accuracy.
The new entry for the WLD token is correctly formatted and consistent with the existing entries in the file. However, it's important to ensure that 'worldcoin' is indeed the correct CoinGecko ID for the WLD token.
To verify the CoinGecko ID, you can use the CoinGecko API or check the CoinGecko website directly. Here's a shell script to verify using the API:
The expected output should show "wld" as the symbol and "worldcoin" as the id. If it doesn't match, please update the entry accordingly.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify the CoinGecko ID for WLD token
curl -s "https://api.coingecko.com/api/v3/coins/worldcoin" | jq '.symbol, .id'
Length of output: 96
@@ -17,6 +17,7 @@ export const qDeposits = () => {
'BridgeRequestEvents.originAmountFormatted',
'BridgeRequestEvents.destAmountFormatted',
'BridgeRequestEvents.sender',
'BridgeRequestEvents.request',
💡 Codebase verification
`'BridgeRequestEvents.request'` is added but not used

The `'BridgeRequestEvents.request'` field has been added to the select statement in `resolvers.ts`, but there are no corresponding usages where it is processed or utilized. This might indicate an unnecessary addition or a missing implementation related to handling the `'request'` field.
🔗 Analysis chain
New field added to the query selection.
The addition of `'BridgeRequestEvents.request'` to the select statement expands the data retrieved by the query. This change may affect the structure of the results returned by the `qDeposits` function.

To ensure this change is consistent with how the data is used elsewhere in the codebase, let's verify the usage of the `qDeposits` function:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check usage of qDeposits function and look for any processing of the 'request' field
# Search for imports of qDeposits
echo "Imports of qDeposits:"
rg --type typescript "import.*qDeposits" -g '!**/depositsQueries.ts'
# Search for uses of qDeposits
echo "\nUses of qDeposits:"
rg --type typescript "qDeposits\(\)" -g '!**/depositsQueries.ts'
# Search for processing of 'request' field from BridgeRequestEvents
echo "\nProcessing of 'request' field:"
rg --type typescript "BridgeRequestEvents.*request" -g '!**/depositsQueries.ts'
Length of output: 509
Script:
#!/bin/bash
# Description: Check usage of qDeposits function and look for any processing of the 'request' field
# Search for imports of qDeposits
echo "Imports of qDeposits:"
rg "import.*qDeposits" -g '*.ts' -g '*.tsx' -g '!**/depositsQueries.ts'
# Search for uses of qDeposits
echo "\nUses of qDeposits:"
rg "qDeposits\(\)" -g '*.ts' -g '*.tsx' -g '!**/depositsQueries.ts'
# Search for processing of 'request' field from BridgeRequestEvents
echo "\nProcessing of 'request' field:"
rg "BridgeRequestEvents.*request" -g '*.ts' -g '*.tsx' -g '!**/depositsQueries.ts'
Length of output: 2396
Script:
#!/bin/bash
# Description: Verify usage of 'BridgeRequestEvents.request' in resolvers.ts
# Search for usages of 'BridgeRequestEvents.request' in resolvers.ts
echo "'BridgeRequestEvents.request' usage in resolvers.ts:"
rg "BridgeRequestEvents\.request" packages/rfq-indexer/api/src/graphql/resolvers.ts
Length of output: 243
480: {
  name: 'Worldchain',
  FastBridgeV2: {
    address: '0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E',
💡 Codebase verification
The FastBridgeV2 address is reused across multiple networks, including Worldchain. Please verify that this duplication is intentional and correctly configured.
🔗 Analysis chain
Verify the FastBridgeV2 address for Worldchain.
The FastBridgeV2 address (0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E) is the same as several other networks (Ethereum, Arbitrum, Optimism, Base, and Scroll). Please confirm that this address is correct for Worldchain.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if the FastBridgeV2 address for Worldchain is unique or shared with other networks
echo "Networks sharing the FastBridgeV2 address 0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E:"
grep -n "0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E" packages/rfq-indexer/api/src/constants/networkConfig.ts
Length of output: 413
Script:
#!/bin/bash
# Description: Verify if the FastBridgeV2 address for Worldchain is unique across the codebase
echo "Searching for FastBridgeV2 address 0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E in the codebase:"
grep -rn "0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E" packages/rfq-indexer/api/src/constants/networkConfig.ts
Length of output: 693
Script:
#!/bin/bash
# Description: Extract context around each FastBridgeV2 address occurrence to identify associated networks
echo "Retrieving context around FastBridgeV2 address 0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E:"
grep -C 3 "0x5523D3c98809DdDB82C686E152F5C58B1B0fB59E" packages/rfq-indexer/api/src/constants/networkConfig.ts
Length of output: 1395
@@ -10,6 +10,7 @@ export interface BridgeRequestEvents {
originChainId: ColumnType<number>
originChain: ColumnType<string>
sender: ColumnType<string>
request: ColumnType<string>
💡 Codebase verification
Action Required: Update Existing Usages of BridgeRequestEvents
The addition of the `request` property to the `BridgeRequestEvents` interface necessitates updating all existing instances where this interface is used. Please ensure that each usage includes the new `request` property to prevent potential bugs or runtime errors.
🔗 Analysis chain
Approved: New `request` property added to `BridgeRequestEvents`

The addition of the `request` property to the `BridgeRequestEvents` interface is logical and consistent with the existing structure. It likely represents the actual request data associated with a bridge event.

Consider adding a brief comment to explain the purpose and content of the `request` property for better documentation.

Please verify whether any existing code that uses the `BridgeRequestEvents` interface needs to be updated to accommodate this new property. Run the following script to find potential usage:
@@ -180,6 +180,7 @@ type DateResultByChain {
blast: Float
scroll: Float
linea: Float
worldchain: Float
💡 Codebase verification
Action Required: Backend Integration for `worldchain` Field Incomplete

While the `worldchain: Float` field has been successfully added to the GraphQL schema and is present in client queries, the backend implementation does not currently handle the new field. Specifically:

- Resolvers missing `worldchain`: no occurrences of `worldchain` were found in resolver implementation files (`*resolvers.go`).
- Data fetching logic lacks `worldchain`: `worldchain` is not present in the data fetching functions within `*reader.go` or `*service.go` files.
- Incomplete test coverage: `worldchain` appears only in test files (`server_test.go`), indicating that tests exist but the actual implementation does not.

To ensure the new field is fully operational, the backend resolvers and data fetching logic must be updated to include and handle the `worldchain` field appropriately.
🔗 Analysis chain
LGTM: New field added for worldchain statistics.

The addition of the `worldchain: Float` field to the `DateResultByChain` type is consistent with the existing structure and expands the schema to include metrics for a new blockchain network. This change aligns with the schema's current design and naming conventions.

To ensure this change is fully implemented, please verify the following:

- The resolver for `DateResultByChain` has been updated to include the `worldchain` field.
- Any data fetching or processing logic related to `DateResultByChain` now accounts for the `worldchain` data.
- Any GraphQL queries or mutations using `DateResultByChain` have been updated to include the new field where necessary.

You can use the following script to check for related changes. This will help ensure that the new field is properly integrated into the system.

Further Verification Needed: Adjusting Shell Scripts to Search GraphQL Queries

The previous shell script failed to search GraphQL queries due to an unrecognized file type. Please use the following revised shell script to search for `DateResultByChain` and `worldchain` in GraphQL query files. This script searches for the `DateResultByChain` type and the `worldchain` field within GraphQL files, ensuring that all relevant GraphQL queries are examined.

Further Verification Needed: Ensure Backend Integration of `worldchain` Field

To confirm that the `worldchain` field is fully integrated into the backend, please run the following shell scripts, which check whether the `worldchain` field is handled in resolver functions and data fetching logic. This will help ensure that the `worldchain` field is properly managed in the backend components, including resolvers, data fetching logic, and test cases.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for related changes to the new worldchain field
# Search for DateResultByChain in Go files
echo "Searching for DateResultByChain in Go files:"
rg --type go "DateResultByChain" -C 5
# Search for worldchain in Go files
echo "Searching for worldchain in Go files:"
rg --type go "worldchain" -C 5
# Search for GraphQL queries using DateResultByChain
echo "Searching for GraphQL queries using DateResultByChain:"
rg --type graphql "DateResultByChain" -C 5
Length of output: 132163
Script:
#!/bin/bash
# Description: Check for related changes to the new worldchain field in GraphQL queries
# Search for DateResultByChain in GraphQL files
echo "Searching for DateResultByChain in GraphQL files:"
rg "DateResultByChain" -g "*.graphql" -g "*.gql" -C 5
# Search for worldchain in GraphQL files
echo "Searching for worldchain in GraphQL files:"
rg "worldchain" -g "*.graphql" -g "*.gql" -C 5
Length of output: 3793
Script:
#!/bin/bash
# Description: Verify backend integration of the worldchain field
# Search for worldchain in resolver implementations
echo "Searching for worldchain in resolver implementations:"
rg "worldchain" -g "*resolvers.go" -C 5
# Search for worldchain in data fetching logic
echo "Searching for worldchain in data fetching functions:"
rg "worldchain" -g "*reader.go" -g "*service.go" -C 5
# Search for worldchain in tests to ensure it's covered
echo "Searching for worldchain in test files:"
rg "worldchain" -g "*_test.go" -C 5
Length of output: 1321
} else if strings.EqualFold(tokenAddressStr, "0x2cFc85d8E48F8EAB294be644d9E25C3030863003") || strings.EqualFold(tokenAddressStr, "0xdC6fF44d5d932Cbd77B52E5612Ba0529DC6226F1") {
	rfqEvent.TokenSymbol = "WLD"
	rfqEvent.TokenDecimal = new(uint8)
	*rfqEvent.TokenDecimal = 18
	curCoinGeckoID = "worldchain"
🛠️ Refactor suggestion
Consider refactoring token identification logic for better maintainability
While the addition of support for the "WLD" token is a good improvement, there are several aspects that could be enhanced:
- Hardcoded addresses: Consider moving the token addresses to constants or a configuration file. This will make it easier to maintain and update in the future.
- Assumption for other tokens: The function assumes any token that isn't ETH or WLD is USDC. This might lead to incorrect handling of other tokens. Consider implementing a more robust token identification mechanism.
- Error handling: There's no explicit error handling for unknown tokens. Consider adding a default case that logs a warning or returns an error for unrecognized token addresses.
- Code structure: The current if-else structure could become unwieldy as more tokens are added. Consider using a map or switch statement for better readability and maintainability.
Here's a suggested refactoring to address these points:
var tokenConfig = map[string]struct {
symbol string
decimal uint8
geckoID string
}{
	// NOTE: keys must be all-lowercase, because the lookup below lowercases
	// the address before indexing the map; mixed-case keys would never match.
	"0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee": {"ETH", 18, "ethereum"},
	"0x2cfc85d8e48f8eab294be644d9e25c3030863003": {"WLD", 18, "worldchain"},
	"0xdc6ff44d5d932cbd77b52e5612ba0529dc6226f1": {"WLD", 18, "worldchain"},
	// Add other known tokens here
}
func (p *RFQParser) MatureLogs(ctx context.Context, rfqEvent *model.RFQEvent, iFace rfqTypes.EventLog, chainID uint32) (interface{}, error) {
// ... (previous code remains unchanged)
tokenAddressStr := strings.ToLower(common.HexToAddress(rfqEvent.OriginToken).Hex())
if tokenInfo, known := tokenConfig[tokenAddressStr]; known {
rfqEvent.TokenSymbol = tokenInfo.symbol
rfqEvent.TokenDecimal = &tokenInfo.decimal
curCoinGeckoID = tokenInfo.geckoID
} else {
// Log a warning for unknown token
logger.Warnf("Unknown token address: %s", tokenAddressStr)
// Default to USDC or handle unknown tokens as needed
rfqEvent.TokenSymbol = "USDC"
decimal := uint8(6)
rfqEvent.TokenDecimal = &decimal
curCoinGeckoID = usdcCoinGeckoID
}
// ... (rest of the function remains unchanged)
}
This refactoring improves maintainability, makes it easier to add new tokens, and provides better handling for unknown tokens.
if status == "Requested" {
	objects = append(objects, &slack.TextBlockObject{
		Type: slack.MarkdownType,
		Text: "*DestTxHash*: not available",
	})
} else {
	//nolint: gosec
	objects = append(objects, &slack.TextBlockObject{
		Type: slack.MarkdownType,
		Text: fmt.Sprintf("*DestTxHash*: %s", toTXSlackLink(res.BridgeRelay.TransactionHash, uint32(res.Bridge.DestChainID))),
	})
}

-_, err := ctx.Response().ReplyBlocks(slackBlocks, slacker.WithUnfurlLinks(false))
+slackBlocks = append(slackBlocks, slack.NewSectionBlock(nil, objects, nil))
+
+_, err = ctx.Response().ReplyBlocks(slackBlocks, slacker.WithUnfurlLinks(false))
Status-based message customization with DRY opportunity
The conditional logic for handling different RFQ statuses improves the user experience by providing relevant information. However, there are two areas for improvement:
- DRY Principle: The code for adding TextBlockObjects is repeated. Consider refactoring to a helper function:
func addTextBlockObject(objects []*slack.TextBlockObject, text string) []*slack.TextBlockObject {
return append(objects, &slack.TextBlockObject{
Type: slack.MarkdownType,
Text: text,
})
}
- Integer Overflow: There's another potential integer overflow when converting `res.Bridge.DestChainID` to `uint32` on line 212. Apply the same safe casting solution as suggested for the `OriginChainID`.
if strings.HasPrefix(rfqHash, "0X") {
	rfqHash = strings.ToLower(rfqHash[2:])
}
🛠️ Refactor suggestion
Suggestion: Improve hash prefix handling in toExplorerSlackLink
The current implementation only handles the uppercase "0X" prefix. To ensure consistent behavior with different hash formats, consider updating the condition to handle both uppercase and lowercase prefixes.
Apply this diff to address the issue:
-if strings.HasPrefix(rfqHash, "0X") {
+if strings.HasPrefix(rfqHash, "0X") || strings.HasPrefix(rfqHash, "0x") {
rfqHash = strings.ToLower(rfqHash[2:])
}
This change will ensure that both "0X" and "0x" prefixes are handled consistently.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if strings.HasPrefix(rfqHash, "0X") || strings.HasPrefix(rfqHash, "0x") {
	rfqHash = strings.ToLower(rfqHash[2:])
}
objects := []*slack.TextBlockObject{
	{
		Type: slack.MarkdownType,
		Text: fmt.Sprintf("*Relayer*: %s", res.BridgeRelay.Relayer),
	},
	{
		Type: slack.MarkdownType,
		Text: fmt.Sprintf("*Status*: %s", status),
	},
	{
		Type: slack.MarkdownType,
		Text: fmt.Sprintf("*TxID*: %s", toExplorerSlackLink(res.Bridge.TransactionID)),
	},
	{
		Type: slack.MarkdownType,
		//nolint: gosec
		Text: fmt.Sprintf("*OriginTxHash*: %s", toTXSlackLink(res.BridgeRequest.TransactionHash, uint32(res.Bridge.OriginChainID))),
	},
	{
		Type: slack.MarkdownType,
		Text: fmt.Sprintf("*Estimated Tx Age*: %s", humanize.Time(time.Unix(res.BridgeRelay.BlockTimestamp, 0))),
	},
}
Improved Slack message construction with potential overflow risk
The new approach to constructing Slack messages using TextBlockObjects improves readability and maintainability. However, there's a potential integer overflow risk when converting res.Bridge.OriginChainID
to uint32
on line 195.
Consider using a safer conversion method or ensuring that the chain ID is within the valid range for uint32
:
- Text: fmt.Sprintf("*OriginTxHash*: %s", toTXSlackLink(res.BridgeRequest.TransactionHash, uint32(res.Bridge.OriginChainID))),
+ Text: fmt.Sprintf("*OriginTxHash*: %s", toTXSlackLink(res.BridgeRequest.TransactionHash, safeCastToUint32(res.Bridge.OriginChainID))),
You'll need to implement a safeCastToUint32
function that handles potential overflows.
Committable suggestion was skipped due to low confidence.
@@ -20,6 +20,7 @@ const qDeposits = () => {
'BridgeRequestEvents.originAmountFormatted',
'BridgeRequestEvents.destAmountFormatted',
'BridgeRequestEvents.sender',
'BridgeRequestEvents.request',
💡 Codebase verification
Issue Found: `'request'` Field Not Handled in `nest_results` Function

The new field `'BridgeRequestEvents.request'` has been added to the `qDeposits` function but is not processed within the `nest_results` function. This omission may lead to incomplete data handling or potential runtime errors.

Please address the following:

- Update the `nest_results` function: ensure that it properly handles the `'request'` field by incorporating it into the data processing logic.
- Verify data nesting: confirm that the `'request'` data is correctly nested and utilized within the resolver to maintain data integrity.
- Review related functionality: check other areas of the codebase that interact with `qDeposits` and `nest_results` to ensure that the addition does not introduce inconsistencies or affect existing functionalities.
🔗 Analysis chain
Approved: New field added to `qDeposits` function

The addition of `'BridgeRequestEvents.request'` to the `qDeposits` function is approved. This change will provide more comprehensive data about bridge request events.

Consider the following points:

- Verify that the `nest_results` function properly handles this new field. It may need to be updated to include the `'request'` field in the appropriate nested object.
- Update any relevant GraphQL schema definitions or documentation to reflect this new field.
- Be aware of potential performance implications. If the `'request'` field contains a large amount of data, it might impact query execution time and response size. Consider monitoring query performance after this change.

To ensure that the `'request'` field is properly handled in the `nest_results` function, please run the following verification script:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the new 'request' field is properly handled in the nest_results function
# Search for the nest_results function definition
echo "Searching for nest_results function:"
sed -n '/const nest_results/,/}/p' packages/rfq-indexer/api/src/graphql/resolvers.ts
# Search for usage of the 'request' field within the nest_results function
echo "\nSearching for 'request' field usage in nest_results:"
sed -n '/const nest_results/,/}/p' packages/rfq-indexer/api/src/graphql/resolvers.ts | grep 'request'
# If the 'request' field is not found in the nest_results function, it might need to be added
Length of output: 543
Script:
#!/bin/bash
# Description: Verify all usages of the 'request' field in resolvers.ts
echo "Searching for all occurrences of 'request' in resolvers.ts:"
rg 'request' packages/rfq-indexer/api/src/graphql/resolvers.ts
echo "\nChecking if 'BridgeRequestEvents.request' is processed in the qDeposits function:"
sed -n '/const qDeposits/,/}/p' packages/rfq-indexer/api/src/graphql/resolvers.ts | rg 'BridgeRequestEvents.request'
Length of output: 571
Description
A clear and concise description of the features you're adding in this pull request.
Additional context
Add any other context about the problem you're solving.
Metadata
Summary by CodeRabbit