
[ZHA] failed to create backup - timeout #117859

Closed
liouma opened this issue May 21, 2024 · 5 comments · Fixed by #118658

Comments


liouma commented May 21, 2024

The problem

ZHA backup returns "timeout" error.
On "devices & services", "ZHA", "configure", when I click on "download backup", I get the following error : "failed to create backup - timeout"

I have 120 Zigbee devices, and my SkyConnect dongle is not in a central place in the house, which is why I'm switching to an SLZB-06M. This Ethernet (PoE) adapter will be placed in a central spot in the house.
That's why I want to "migrate radio", but I'm stuck at the first step: the backup doesn't work.
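
For context on where such a timeout can surface: the ZHA "download backup" action awaits an asynchronous network backup from the radio library, and the UI error appears when that coroutine does not complete in time. Below is a minimal, self-contained sketch of that failure mode in plain asyncio; the 30-second timeout and the create_network_backup() stand-in are illustrative assumptions, not ZHA's actual code.

import asyncio

async def create_network_backup() -> dict:
    # Stand-in for the real backup coroutine; a busy or slow coordinator
    # can make the real call take far longer than expected.
    await asyncio.sleep(60)
    return {"version": 1}

async def download_backup() -> None:
    try:
        # The timeout value here is illustrative, not ZHA's actual setting.
        backup = await asyncio.wait_for(create_network_backup(), timeout=30)
        print("backup created:", backup)
    except asyncio.TimeoutError:
        # This is the condition surfaced in the UI as
        # "failed to create backup - timeout".
        print("failed to create backup - timeout")

if __name__ == "__main__":
    asyncio.run(download_backup())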

What version of Home Assistant Core has the issue?

core-2024.5.4

What was the last working version of Home Assistant Core?

No response

What type of installation are you running?

Home Assistant OS

Integration causing the issue

ZHA

Link to integration documentation on our website

https://www.home-assistant.io/integrations/zha/

Diagnostics information

home-assistant_zha_2024-05-21T12-15-25.914Z.log

Example YAML snippet

No response

Anything in the logs that might be useful for us?

I don't understand most of the logs, so I uploaded the whole sequence. However, this may be related:

2024-05-21 16:15:16.938 DEBUG (MainThread) [bellows.ezsp.protocol] Application frame received incomingMessageHandler: [<EmberIncomingMessageType.INCOMING_UNICAST: 0>, EmberApsFrame(profileId=260, clusterId=2820, sourceEndpoint=1, destinationEndpoint=1, options=<EmberApsOption.APS_OPTION_RETRY|APS_OPTION_ENABLE_ROUTE_DISCOVERY: 320>, groupId=0, sequence=79), 136, -66, 0x32F4, 255, 255, b'\x18r\n\x05\x05!\xf2\x00']
2024-05-21 16:15:16.939 DEBUG (MainThread) [bellows.zigbee.application] Received incomingMessageHandler frame with [<EmberIncomingMessageType.INCOMING_UNICAST: 0>, EmberApsFrame(profileId=260, clusterId=2820, sourceEndpoint=1, destinationEndpoint=1, options=<EmberApsOption.APS_OPTION_RETRY|APS_OPTION_ENABLE_ROUTE_DISCOVERY: 320>, groupId=0, sequence=79), 136, -66, 0x32F4, 255, 255, b'\x18r\n\x05\x05!\xf2\x00']

And also that:

2024-05-21 16:14:37.163 WARNING (MainThread) [bellows.ezsp.protocol] Failed to parse frame getKeyTableEntry: b'03eb9f030100000043690020517600005b0000009116002032'
2024-05-21 16:14:37.163 WARNING (MainThread) [bellows.ezsp] Failed to parse frame, ignoring
2024-05-21 16:14:37.230 WARNING (MainThread) [bellows.ezsp] Failed to parse frame, ignoring
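
For anyone who wants to look at the raw payload from the getKeyTableEntry warning above, here is a minimal sketch in plain Python: it only converts the logged hex string back to bytes and prints it, and does not try to reconstruct the EZSP frame layout, which is internal to bellows.

# Hex string copied verbatim from the warning above.
raw = bytes.fromhex(
    "03eb9f030100000043690020517600005b0000009116002032"
)

print(len(raw), "bytes")  # 25 bytes
print(raw.hex(" "))       # space-separated hex for easier reading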

Additional information

Many other people report a similar issue here:
https://community.home-assistant.io/t/recent-timeout-error-on-backup-job/729289/4

@home-assistant

Hey there @dmulcahey, @Adminiuga, @puddly, @TheJulianJES, mind taking a look at this issue as it has been labeled with an integration (zha) you are listed as a code owner for? Thanks!

Code owner commands

Code owners of zha can trigger bot actions by commenting:

  • @home-assistant close Closes the issue.
  • @home-assistant rename Awesome new title Renames the issue.
  • @home-assistant reopen Reopen the issue.
  • @home-assistant unassign zha Removes the current integration label and assignees on the issue, add the integration domain after the command.
  • @home-assistant add-label needs-more-information Add a label (needs-more-information, problem in dependency, problem in custom component) to the issue.
  • @home-assistant remove-label needs-more-information Remove a label (needs-more-information, problem in dependency, problem in custom component) on the issue.

(message by CodeOwnersMention)


zha documentation
zha source
(message by IssueLinks)


liouma commented May 23, 2024

OK, I finally gave up on making a backup and successfully migrated to the new adapter.
With this new adapter I don't have this issue.
So the problem is not solved, but I won't be able to reproduce it. I'm closing the issue.


alexdelprete commented May 26, 2024

@puddly I have the same issue. I pinned it down to ZHA because general HA backups were failing with a timeout error; while troubleshooting, I found out that the ZHA backups were failing:

[screenshot of the ZHA backup timeout error]

In the logs, for HA backups I only see this:

2024-05-26 12:20:00.809 INFO (MainThread) [homeassistant.components.recorder.backup] Backup start notification, locking database for writes
2024-05-26 12:20:07.089 INFO (MainThread) [homeassistant.components.recorder.backup] Backup end notification, releasing write lock

I enabled debug logging for bellows and zigpy and filtered the logs for zigpy.backups. At 12:20:00, when the HA backup started, I see nothing in the zigpy/bellows debug logs:

2024-05-26 12:18:50.316 DEBUG (MainThread) [zigpy.backups] Adding a new backup NetworkBackup(version=1, backup_time=datetime.datetime(2023, 11, 30, 4, 20, 56, 672744, tzinfo=datetime.timezone.utc), network_info=NetworkInfo(extended_pan_id=xx:xx:xx:xx:xx:xx:xx:xx, pan_id=0xXXXX, nwk_update_id=0, nwk_manager_id=0x0000, channel=15, channel_mask=<Channels.ALL_CHANNELS: 134215680>, security_level=5, network_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=25921687, rx_counter=0, seq=0, partner_ieee=ff:ff:ff:ff:ff:ff:ff:ff), tc_link_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=692224, rx_counter=0, seq=0, partner_ieee=00:0d:6f:00:0a:ff:73:23), key_table=[], children=[], nwk_addresses={}, stack_specific={'ezsp': {'hashed_tclk': '18a6ded1cc62ab3890d134ee14ae4374'}}, metadata={'ezsp': {'manufacturer': '', 'board': '', 'version': '7.2.1.0 build 144', 'stack_version': 10, 'can_burn_userdata_custom_eui64': False, 'can_rewrite_custom_eui64': False}}, source='bellows@0.36.8'), node_info=NodeInfo(nwk=0x0000, ieee=00:0d:6f:00:0a:ff:73:23, logical_type=<LogicalType.Coordinator: 0>, model=None, manufacturer=None, version=None))
2024-05-26 12:18:50.316 DEBUG (MainThread) [zigpy.backups] Adding a new backup NetworkBackup(version=1, backup_time=datetime.datetime(2023, 11, 30, 4, 21, 17, 661613, tzinfo=datetime.timezone.utc), network_info=NetworkInfo(extended_pan_id=xx:xx:xx:xx:xx:xx:xx:xx, pan_id=0xXXXX, nwk_update_id=0, nwk_manager_id=0x0000, channel=15, channel_mask=<Channels.ALL_CHANNELS: 134215680>, security_level=5, network_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=25919493, rx_counter=0, seq=0, partner_ieee=ff:ff:ff:ff:ff:ff:ff:ff), tc_link_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=692224, rx_counter=0, seq=0, partner_ieee=00:0d:6f:00:0a:ff:73:23), key_table=[], children=[], nwk_addresses={}, stack_specific={'ezsp': {'hashed_tclk': '18a6ded1cc62ab3890d134ee14ae4374'}}, metadata={'ezsp': {'manufacturer': '', 'board': '', 'version': '7.2.1.0 build 144', 'stack_version': 10, 'can_burn_userdata_custom_eui64': False, 'can_rewrite_custom_eui64': False}}, source='bellows@0.36.8'), node_info=NodeInfo(nwk=0x0000, ieee=00:0d:6f:00:0a:ff:73:23, logical_type=<LogicalType.Coordinator: 0>, model=None, manufacturer=None, version=None))
2024-05-26 12:18:50.316 DEBUG (MainThread) [zigpy.backups] Adding a new backup NetworkBackup(version=1, backup_time=datetime.datetime(2023, 12, 6, 10, 11, 2, 375196, tzinfo=datetime.timezone.utc), network_info=NetworkInfo(extended_pan_id=xx:xx:xx:xx:xx:xx:xx:xx, pan_id=0xXXXX, nwk_update_id=1, nwk_manager_id=0x0000, channel=20, channel_mask=<Channels.ALL_CHANNELS: 134215680>, security_level=5, network_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=26107908, rx_counter=0, seq=0, partner_ieee=ff:ff:ff:ff:ff:ff:ff:ff), tc_link_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=700416, rx_counter=0, seq=0, partner_ieee=00:0d:6f:00:0a:ff:73:23), key_table=[], children=[], nwk_addresses={}, stack_specific={'ezsp': {'hashed_tclk': '18a6ded1cc62ab3890d134ee14ae4374'}}, metadata={'ezsp': {'manufacturer': '', 'board': '', 'version': '7.2.1.0 build 144', 'stack_version': 10, 'can_burn_userdata_custom_eui64': False, 'can_rewrite_custom_eui64': False}}, source='bellows@0.36.8'), node_info=NodeInfo(nwk=0x0000, ieee=00:0d:6f:00:0a:ff:73:23, logical_type=<LogicalType.Coordinator: 0>, model=None, manufacturer=None, version=None))
2024-05-26 12:18:50.316 DEBUG (MainThread) [zigpy.backups] Adding a new backup NetworkBackup(version=1, backup_time=datetime.datetime(2024, 5, 25, 22, 29, 26, 801211, tzinfo=datetime.timezone.utc), network_info=NetworkInfo(extended_pan_id=xx:xx:xx:xx:xx:xx:xx:xx, pan_id=0xXXXX, nwk_update_id=1, nwk_manager_id=0x0000, channel=20, channel_mask=<Channels.ALL_CHANNELS: 134215680>, security_level=5, network_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=31907854, rx_counter=0, seq=0, partner_ieee=ff:ff:ff:ff:ff:ff:ff:ff), tc_link_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=729088, rx_counter=0, seq=0, partner_ieee=00:0d:6f:00:0a:ff:73:23), key_table=[], children=[], nwk_addresses={}, stack_specific={'ezsp': {'hashed_tclk': '18a6ded1cc62ab3890d134ee14ae4374'}}, metadata={'ezsp': {'stack_version': 10, 'can_burn_userdata_custom_eui64': False, 'can_rewrite_custom_eui64': False}}, source='bellows@0.38.4'), node_info=NodeInfo(nwk=0x0000, ieee=00:0d:6f:00:0a:ff:73:23, logical_type=<LogicalType.Coordinator: 0>, model=None, manufacturer=None, version='7.2.1.0 build 144'))
2024-05-26 12:19:00.531 DEBUG (MainThread) [zigpy.backups] Adding a new backup NetworkBackup(version=1, backup_time=datetime.datetime(2024, 5, 26, 10, 19, 0, 531444, tzinfo=datetime.timezone.utc), network_info=NetworkInfo(extended_pan_id=xx:xx:xx:xx:xx:xx:xx:xx, pan_id=0xXXXX, nwk_update_id=1, nwk_manager_id=0x0000, channel=20, channel_mask=<Channels.ALL_CHANNELS: 134215680>, security_level=5, network_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=31936521, rx_counter=0, seq=0, partner_ieee=ff:ff:ff:ff:ff:ff:ff:ff), tc_link_key=Key(key=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx, tx_counter=729088, rx_counter=0, seq=0, partner_ieee=00:0d:6f:00:0a:ff:73:23), key_table=[], children=[], nwk_addresses={}, stack_specific={'ezsp': {'hashed_tclk': '18a6ded1cc62ab3890d134ee14ae4374'}}, metadata={'ezsp': {'stack_version': 10, 'can_burn_userdata_custom_eui64': False, 'can_rewrite_custom_eui64': False}}, source='bellows@0.38.4'), node_info=NodeInfo(nwk=0x0000, ieee=00:0d:6f:00:0a:ff:73:23, logical_type=<LogicalType.Coordinator: 0>, model=None, manufacturer=None, version='7.2.1.0 build 144'))
2024-05-26 12:19:00.531 DEBUG (MainThread) [zigpy.backups] Waiting for 86400s before backing up again
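
For reference, the log filtering described above can be reproduced with a few lines of Python. This is a minimal sketch, assuming the default home-assistant.log location on Home Assistant OS and matching on the bracketed logger names visible in the excerpts; the path and the logger list are the only assumptions.

from pathlib import Path

# Typical log location on Home Assistant OS; adjust for other install types.
LOG_FILE = Path("/config/home-assistant.log")
LOGGERS = (
    "zigpy.backups",
    "bellows.ezsp",
    "homeassistant.components.recorder.backup",
)

def filter_log(path: Path, loggers: tuple[str, ...]) -> list[str]:
    # Keep only lines emitted by the given loggers, which appear in
    # square brackets in Home Assistant's log format.
    return [
        line
        for line in path.read_text(errors="replace").splitlines()
        if any(f"[{logger}]" in line for logger in loggers)
    ]

if __name__ == "__main__":
    for line in filter_log(LOG_FILE, LOGGERS):
        print(line)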

@downtownsj1973

I'm having this same issue.

@downtownsj1973

Thank you for your hard work, team!
