Assertion failed: received sequence number doesn't match request sequence number #313
Comments
I asked a similar question yesterday and this was Sabee's answer:
Hope it helps.
Hi, I found that I use StartOfStream to consume from Event Hubs in my application, so this error can sometimes be thrown; this is not an issue.
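For context on the comment above: starting at the beginning of the stream can request sequence numbers that the hub's retention policy has already pruned, which is one way this assertion surfaces. A minimal configuration sketch, assuming the azure-event-hubs-spark connector and a placeholder connection string:

```scala
import org.apache.spark.eventhubs.{EventHubsConf, EventPosition}

// Placeholder connection string; replace with your own namespace and hub.
val connectionString =
  "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<keyName>;SharedAccessKey=<key>;EntityPath=<hub>"

// Starting from the beginning of the stream means the first requested
// sequence number may already have been pruned by the hub's retention
// policy, which could trigger the sequence-number assertion on affected versions.
val ehConf = EventHubsConf(connectionString)
  .setStartingPosition(EventPosition.fromStartOfStream)
```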
I am getting the same error. The same code works fine in the 2.3.1 release.
Hey @tilumi - do you have any repro steps by chance? I'm not able to reproduce this issue (unless it's the case mentioned earlier, which is expected).
Just did the test, still getting the same exception.
@ytaous can you share a simple repro of the issue with me?
Hey, if any of you can share a repro, please do! The best place to share it is on Gitter.
I'm having the same issue. It is indeed the case that the consumer was stopped for a period longer than the Event Hub's retention. This behavior was introduced recently with the cached receivers: https://github.com/Azure/azure-event-hubs-spark/pull/303/files/31021d29a67630951ffa9edefb1386ad44b2f0b1
* Don't read messages that are already pruned by Event Hub. In the getPartitions method of the EventHubsRDD class we check whether the offsets are still valid; it is possible that retention has kicked in and the messages are no longer available on the bus. For more info, refer to this issue: #313. Did some minor refactoring:
  - Made the clientFactory static so we don't need to pass this constructor around.
  - Changed the signature of allBoundedSeqNos from a Seq to a Map, since the partitionId is unique and later in the code it is also converted to a map.
  - Removed the trim method, since passing Event Hubs config keys to Spark does no harm. Without this change, the tests fail since they are not switched to the simulator.
* Remove the offset calculation
* Bump version to 2.3.3-SNAPSHOT
* Restore trimmed config when creating an EventHubsRDD
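The validity check described above can be illustrated in isolation. The sketch below is a hypothetical simplification (names like `clampStartingSeqNos` are illustrative, not the connector's actual identifiers): before building partitions, each requested starting sequence number is clamped to the earliest sequence number still retained by the hub, so the query never asks for pruned data.

```scala
// Hypothetical sketch of the per-partition validity check: clamp each
// requested starting sequence number to the earliest retained one.
object PrunedOffsetCheck {
  def clampStartingSeqNos(
      requested: Map[Int, Long],   // partitionId -> requested starting seqNo
      earliest: Map[Int, Long]     // partitionId -> earliest retained seqNo
  ): Map[Int, Long] =
    requested.map { case (partitionId, seqNo) =>
      partitionId -> math.max(seqNo, earliest.getOrElse(partitionId, seqNo))
    }

  def main(args: Array[String]): Unit = {
    // Partition 0's requested offset 10 was pruned (earliest retained is 50),
    // so it is moved forward; partition 1's request is still valid.
    val clamped = clampStartingSeqNos(
      requested = Map(0 -> 10L, 1 -> 100L),
      earliest  = Map(0 -> 50L, 1 -> 90L))
    println(clamped) // Map(0 -> 50, 1 -> 100)
  }
}
```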
Checked the issue against this version: it is still happening! The log is here: Driver stacktrace:
Hey @ogidogi - it's true that the message being printed is similar, but the code is quite different. But yeah, this is the only bug I've found in the current release - I'm fixing it now!
What is the release plan for v2.3.3 with these changes?
@ogidogi the plan is to stress test release candidates and fix bugs as they arise. Once that QA process is over, there will be a new release!
This issue should be reopened. I'm currently facing it with the 2.3.2 release and also the current 2.3.3 snapshot (while reading Event Hubs messages from the end of the stream with a 10-minute watermark threshold on my aggregation process).
I am currently facing this issue using the 2.3.2 release. I am consuming from EventPosition.fromStartOfStream with a watermark of 600 seconds. This is happening in only one of my ingestion pipelines; my other pipeline, which reads from a different Event Hub, always works fine. Any recommendation on why this is happening and when it will be fixed?
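A sketch of the setup this comment describes, assuming the azure-event-hubs-spark connector; the connection string, window size, and aggregation are placeholders, and `enqueuedTime` is the timestamp column exposed by the eventhubs source:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, window}
import org.apache.spark.eventhubs.{EventHubsConf, EventPosition}

val spark = SparkSession.builder.appName("eh-watermark-repro").getOrCreate()

// Read from the start of the stream, as in the report above.
val ehConf = EventHubsConf("<connection-string>")
  .setStartingPosition(EventPosition.fromStartOfStream)

val counts = spark.readStream
  .format("eventhubs")
  .options(ehConf.toMap)
  .load()
  .withWatermark("enqueuedTime", "600 seconds") // 10-minute watermark, per the report
  .groupBy(window(col("enqueuedTime"), "1 minute")) // illustrative aggregation
  .count()
```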
Hey! This issue is probably easy to find, but another issue about this has been opened and closed for the 2.3.2 release. It does still happen in the current snapshot.
Commit: 334255e. The version from this commit works for me on 10 streaming apps for more than a week on AKS.
Thanks @ogidogi - I have some slight changes (nothing major) that'll help enforce no data loss. And there's a bug in the Java client that needs to be fixed... besides that, my streams are running great too!
Do you have any solution to avoid the problem?
@shuitai and @belgacea - these issues aren't going to be reopened. There are a couple of things that can cause this assertion failure. I've addressed them in #384, and the changes will be available in the next release.
I tried the solution and it is working for me, using either Event Hubs or IoT Hub as the source. Thank you for the release.
I see a similar issue with the same error, but only when I do spark-submit without the default settings. SBT entry:
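Since the error above appears only with a non-default spark-submit invocation, one common pitfall is a mismatch between the connector version declared in the SBT build and the one supplied at submit time. A hedged example of pinning the connector explicitly at submit time (the Maven coordinates are for the 2.3.x line; the class and jar names are placeholders):

```shell
# Pin the connector at submit time so it matches the version in build.sbt.
# The version shown is illustrative; use the release you actually depend on.
spark-submit \
  --packages com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.2 \
  --class com.example.MyStreamingApp \
  my-streaming-app.jar
```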
Hi, I use the 2.3.2-SNAPSHOT version; running a Spark application shows the following error. I am using the Event Hub with namespace "wascl-nam" and name "wasclmonitors".