
deadlock when writing into Badger using multiple routines #1032

Closed
tony2001 opened this issue Sep 10, 2019 · 18 comments · Fixed by #1070
Labels
area/crash This issue causes a panic or some other kind of exception that causes a crash. kind/bug Something is broken. priority/P1 Serious issue that requires eventual attention (can wait a bit) status/accepted We accept to investigate or work on it.

Comments

@tony2001

go version go1.13 linux/amd64
github.com/dgraph-io/badger v2.0.0-rc3+incompatible
I have to admit I didn't try the latest master, but I've seen this issue since early versions of Badger 2.0 and it's still present in rc3.
The server runs on a 56-core Intel Xeon E5-2660; I have no details about the disk.

I'm reading several multi-gigabyte files in their own binary format and "converting" them into Badger, writing data encoded with Protobuf. The initial data is organized in batches (of arbitrary size, from 1 to 1000+ records per batch), so I'm using WriteBatches to write the actual data and then badger.Update() to update the counters (stored in the same Badger DB). To speed up the process I'm using 64 goroutines that listen on a channel of data and do the actual encoding/writing.

The problem is that after some time Badger deadlocks and stops writing anything; the process is stuck with the backtraces provided below.
As you can see from backtrace 3, I tried to use Txn.Set/Commit directly instead of using badger.Update, but it didn't help.
Unfortunately, I'm unable to provide reproduction code so far, as the issue seems to be reproducible only on really large amounts of data after a couple of hours of running, with no error messages whatsoever. To make it more complicated, I can't reproduce it 100% of the time even with the same initial data and the same Go code.

My code looks very much like this (in simplified Go "pseudocode"):

// this is the function that's called from 64 worker routines
func LoadData(db *badger.DB, records []*Record) error {
    wb := db.NewWriteBatch()
    defer wb.Cancel()
    for _, r := range records {
        key, value := encode(r) // Protobuf-encode the record into a key/value pair
        if err := wb.Set(key, value); err != nil {
            return err
        }
    }
    err := wb.Flush()
    if err == nil {
        updateStats( /* ... */ )
    }
    return err
}

func updateStats( /* ... */ ) error {
    statsLock.Lock()
    defer statsLock.Unlock()
    stats += value
    // ...serialize value into val
    txn := db.NewTransaction(true)
    defer txn.Discard()

    if err := txn.Set(key, val); err != nil {
        return err
    }
    return txn.Commit()
}
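
To give the full picture, the 64-worker fan-out around LoadData looks roughly like this (a sketch with assumed names such as runWorkers and encode; Record is the type from the pseudocode above, and this is not the actual production code):

    import (
        "log"
        "sync"

        "github.com/dgraph-io/badger"
    )

    // runWorkers starts 64 goroutines that consume record batches from a
    // channel and write them through LoadData, as described above.
    func runWorkers(db *badger.DB, batches <-chan []*Record) {
        var wg sync.WaitGroup
        for i := 0; i < 64; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for records := range batches {
                    if err := LoadData(db, records); err != nil {
                        log.Printf("LoadData failed: %v", err)
                    }
                }
            }()
        }
        wg.Wait()
    }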

Backtraces:
1.

6 @ 0x43d600 0x44dcc0 0x44dcab 0x44d912 0x4762e4 0x976709 0x474c83 0x975cb2 0x975c5e 0x985549 0xa16b0b 0xa800ac 0x46b381
#	0x44d911	sync.runtime_Semacquire+0x41					/home/tony/go/src/runtime/sema.go:56
#	0x4762e3	sync.(*WaitGroup).Wait+0x63					/home/tony/go/src/sync/waitgroup.go:130
#	0x976708	github.com/dgraph-io/badger/y.(*Throttle).Finish.func1+0x38	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/y/y.go:292
#	0x474c82	sync.(*Once).doSlow+0xe2					/home/tony/go/src/sync/once.go:66
#	0x975cb1	sync.(*Once).Do+0x71						/home/tony/go/src/sync/once.go:57
#	0x975c5d	github.com/dgraph-io/badger/y.(*Throttle).Finish+0x1d		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/y/y.go:291
#	0x985548	github.com/dgraph-io/badger.(*WriteBatch).Flush+0x78		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/batch.go:159
#	0xa16b0a	go.badoo.dev/meetlist/db.(*DB).AddRomancesFromSnapshot+0x88a	/local/eye/git/meetlist/db/db.go:691
#	0xa800ab	main.snapshotWorker+0x24b					/local/eye/git/meetlist/main.go:617
6 @ 0x43d600 0x44dcc0 0x44dcab 0x44d912 0x4762e4 0x9b5ff1 0x9bfbc0 0x9ae911 0x46b381
#	0x44d911	sync.runtime_Semacquire+0x41					/home/tony/go/src/runtime/sema.go:56
#	0x4762e3	sync.(*WaitGroup).Wait+0x63					/home/tony/go/src/sync/waitgroup.go:130
#	0x9b5ff0	github.com/dgraph-io/badger.(*request).Wait+0x30		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/value.go:938
#	0x9bfbbf	github.com/dgraph-io/badger.(*Txn).commitAndSend.func1+0x3f	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/txn.go:501
#	0x9ae910	github.com/dgraph-io/badger.runTxnCallback+0x50			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/txn.go:572
1 @ 0x43d600 0x44d15b 0x97485c 0x9ac2d0 0x9aedc7 0xa13f8c 0xa13f72 0xa14406 0xa16b74 0xa800ac 0x46b381
#	0x97485b	github.com/dgraph-io/badger/y.(*WaterMark).WaitForMark+0x13b	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/y/watermark.go:124
#	0x9ac2cf	github.com/dgraph-io/badger.(*oracle).readTs+0xdf		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/txn.go:119
#	0x9aedc6	github.com/dgraph-io/badger.(*DB).newTransaction+0xc6		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/txn.go:666
#	0xa13f8b	github.com/dgraph-io/badger.(*DB).NewTransaction+0x10b		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/txn.go:634
#	0xa13f71	go.badoo.dev/meetlist/db.(*DB).writeStatsEntry+0xf1		/local/eye/git/meetlist/db/db.go:154
#	0xa14405	go.badoo.dev/meetlist/db.(*DB).updateStats+0xb5			/local/eye/git/meetlist/db/db.go:181
#	0xa16b73	go.badoo.dev/meetlist/db.(*DB).AddRomancesFromSnapshot+0x8f3	/local/eye/git/meetlist/db/db.go:694
#	0xa800ab	main.snapshotWorker+0x24b					/local/eye/git/meetlist/main.go:617

There are some more goroutines with their own traces; I don't know whether they are related to the issue:

1 @ 0x43d600 0x44d15b 0x98b827 0x46b381
#	0x98b826	github.com/dgraph-io/badger.(*DB).doWrites+0x2d6	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/db.go:734

1 @ 0x43d600 0x44d15b 0x98d800 0x46b381
#	0x98d7ff	github.com/dgraph-io/badger.(*DB).updateSize+0x15f	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/db.go:1021

1 @ 0x43d600 0x44d15b 0x9a5eff 0x46b381
#	0x9a5efe	github.com/dgraph-io/badger.(*publisher).listenForUpdates+0x17e	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/publisher.go:67

1 @ 0x43d600 0x44dcc0 0x44dcab 0x44da27 0x47483c 0x9b8c6f 0x9b8aab 0x99c9ce 0x99e556 0x99ef97 0x99a1d9 0x46b381
#	0x44da26	sync.runtime_SemacquireMutex+0x46						/home/tony/go/src/runtime/sema.go:71
#	0x47483b	sync.(*Mutex).lockSlow+0xfb							/home/tony/go/src/sync/mutex.go:138
#	0x9b8c6e	sync.(*Mutex).Lock+0x1fe							/home/tony/go/src/sync/mutex.go:81
#	0x9b8aaa	github.com/dgraph-io/badger.(*valueLog).updateDiscardStats+0x3a			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/value.go:1385
#	0x99c9cd	github.com/dgraph-io/badger.(*levelsController).compactBuildTables+0x208d	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:636
#	0x99e555	github.com/dgraph-io/badger.(*levelsController).runCompactDef+0xc5		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:791
#	0x99ef96	github.com/dgraph-io/badger.(*levelsController).doCompact+0x4b6			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:860
#	0x99a1d8	github.com/dgraph-io/badger.(*levelsController).runWorker+0x318			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:356

1 @ 0x43d600 0x44dcc0 0x44dcab 0x44da27 0x47483c 0x9b8f80 0x9b8cc0 0x9b8c02 0x99c9ce 0x99e556 0x99ef97 0x99a1d9 0x46b381
#	0x44da26	sync.runtime_SemacquireMutex+0x46						/home/tony/go/src/runtime/sema.go:71
#	0x47483b	sync.(*Mutex).lockSlow+0xfb							/home/tony/go/src/sync/mutex.go:138
#	0x9b8f7f	sync.(*Mutex).Lock+0x2ef							/home/tony/go/src/sync/mutex.go:81
#	0x9b8cbf	github.com/dgraph-io/badger.(*valueLog).flushDiscardStats+0x2f			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/value.go:1403
#	0x9b8c01	github.com/dgraph-io/badger.(*valueLog).updateDiscardStats+0x191		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/value.go:1393
#	0x99c9cd	github.com/dgraph-io/badger.(*levelsController).compactBuildTables+0x208d	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:636
#	0x99e555	github.com/dgraph-io/badger.(*levelsController).runCompactDef+0xc5		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:791
#	0x99ef96	github.com/dgraph-io/badger.(*levelsController).doCompact+0x4b6			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:860
#	0x99a1d8	github.com/dgraph-io/badger.(*levelsController).runWorker+0x318			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:356

1 @ 0x43d600 0x459807 0x4597dd 0x98af33 0x9bb65a 0x46b381
#	0x4597dc	time.Sleep+0x12c					/home/tony/go/src/runtime/time.go:105
#	0x98af32	github.com/dgraph-io/badger.(*DB).writeRequests+0x242	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/db.go:657
#	0x9bb659	github.com/dgraph-io/badger.(*DB).doWrites.func1+0x59	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/db.go:706

1 @ 0x43d600 0x459807 0x4597dd 0x99f7ab 0x98cf49 0x98d1bb 0x9bb1a7 0x46b381
#	0x4597dc	time.Sleep+0x12c							/home/tony/go/src/runtime/time.go:105
#	0x99f7aa	github.com/dgraph-io/badger.(*levelsController).addLevel0Table+0x37a	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/levels.go:907
#	0x98cf48	github.com/dgraph-io/badger.(*DB).handleFlushTask+0x7b8			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/db.go:925
#	0x98d1ba	github.com/dgraph-io/badger.(*DB).flushMemtable+0x16a			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/db.go:941
#	0x9bb1a6	github.com/dgraph-io/badger.Open.func4+0x36				/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v2.0.0-rc3+incompatible/db.go:303
@poonai
Contributor

poonai commented Sep 10, 2019

Hey @tony2001, we fixed the deadlock issue. #976

Please check with master.

@tony2001
Author

Just tested it with 398445a and it still hangs here:

1 @ 0x43d600 0x44dcc0 0x44dcab 0x44d912 0x4762e4 0x9b5c21 0x9b8b43 0x9b87de 0x99c38e 0x99df16 0x99e957 0x999b99 0x46b381
#	0x44d911	sync.runtime_Semacquire+0x41							/home/tony/go/src/runtime/sema.go:56
#	0x4762e3	sync.(*WaitGroup).Wait+0x63							/home/tony/go/src/sync/waitgroup.go:130
#	0x9b5c20	github.com/dgraph-io/badger.(*request).Wait+0x30				/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190827230100-398445a29fa7/value.go:941
#	0x9b8b42	github.com/dgraph-io/badger.(*valueLog).flushDiscardStats+0x2c2			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190827230100-398445a29fa7/value.go:1426
#	0x9b87dd	github.com/dgraph-io/badger.(*valueLog).updateDiscardStats+0x13d		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190827230100-398445a29fa7/value.go:1397
#	0x99c38d	github.com/dgraph-io/badger.(*levelsController).compactBuildTables+0x208d	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190827230100-398445a29fa7/levels.go:636
#	0x99df15	github.com/dgraph-io/badger.(*levelsController).runCompactDef+0xc5		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190827230100-398445a29fa7/levels.go:791
#	0x99e956	github.com/dgraph-io/badger.(*levelsController).doCompact+0x4b6			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190827230100-398445a29fa7/levels.go:860
#	0x999b98	github.com/dgraph-io/badger.(*levelsController).runWorker+0x318			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190827230100-398445a29fa7/levels.go:356

@tony2001
Author

Still waiting for test results with the latest master, but I guess it'll still fail.
I believe this should fix it:

diff --git a/value.go b/value.go
index 2478f19..892ad02 100644
--- a/value.go
+++ b/value.go
@@ -1447,9 +1447,9 @@ func (vlog *valueLog) updateDiscardStats(stats map[uint32]int64) error {
 // flushDiscardStats inserts discard stats into badger. Returns error on failure.
 func (vlog *valueLog) flushDiscardStats() error {
        vlog.lfDiscardStats.Lock()
-       defer vlog.lfDiscardStats.Unlock()
 
        if len(vlog.lfDiscardStats.m) == 0 {
+               vlog.lfDiscardStats.Unlock()
                return nil
        }
        entries := []*Entry{{
@@ -1462,11 +1462,14 @@ func (vlog *valueLog) flushDiscardStats() error {
                // When L0 compaction in close may push discard stats.
                // So ignoring it.
                // https://github.com/dgraph-io/badger/issues/970
+               vlog.lfDiscardStats.Unlock()
                return nil
        } else if err != nil {
+               vlog.lfDiscardStats.Unlock()
                return errors.Wrapf(err, "failed to push discard stats to write channel")
        }
        vlog.lfDiscardStats.updatesSinceFlush = 0
+       vlog.lfDiscardStats.Unlock()
        return req.Wait()
 }

@connorgorman
Contributor

This commit fixes the issue on latest master:
398445a

@tony2001
Author

@connorgorman latest master also deadlocks in the same place:


1 @ 0x43d600 0x44dcc0 0x44dcab 0x44d912 0x4762e4 0x9b5fd1 0x9b8ef3 0x9b8b8e 0x99c57e 0x99e136 0x99eb77 0x999b39 0x46b381
#	0x44d911	sync.runtime_Semacquire+0x41							/home/tony/go/src/runtime/sema.go:56
#	0x4762e3	sync.(*WaitGroup).Wait+0x63							/home/tony/go/src/sync/waitgroup.go:130
#	0x9b5fd0	github.com/dgraph-io/badger.(*request).Wait+0x30				/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190906051254-a1ff34882564/value.go:985
#	0x9b8ef2	github.com/dgraph-io/badger.(*valueLog).flushDiscardStats+0x2c2			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190906051254-a1ff34882564/value.go:1470
#	0x9b8b8d	github.com/dgraph-io/badger.(*valueLog).updateDiscardStats+0x13d		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190906051254-a1ff34882564/value.go:1441
#	0x99c57d	github.com/dgraph-io/badger.(*levelsController).compactBuildTables+0x22dd	/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190906051254-a1ff34882564/levels.go:636
#	0x99e135	github.com/dgraph-io/badger.(*levelsController).runCompactDef+0xc5		/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190906051254-a1ff34882564/levels.go:791
#	0x99eb76	github.com/dgraph-io/badger.(*levelsController).doCompact+0x4b6			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190906051254-a1ff34882564/levels.go:860
#	0x999b38	github.com/dgraph-io/badger.(*levelsController).runWorker+0x318			/home/tony/go-packages/pkg/mod/github.com/dgraph-io/badger@v1.6.1-0.20190906051254-a1ff34882564/levels.go:356

@poonai
Contributor

poonai commented Sep 11, 2019

Hi @tony2001,

Thanks for checking it out. Do you have a snippet that could reproduce this issue?

It would be very helpful for debugging :)

@tony2001
Author

The patch above still makes sense to me (release the lock before Wait()'ing), but it didn't fix the issue either.

@tony2001
Author

Unfortunately, no, I don't have a reproduction case, and it takes more than 2 hours to reproduce.
I can provide full backtraces; would that help in any way?

@poonai
Contributor

poonai commented Sep 11, 2019

Yeah, please share it.

@tony2001
Author

Backtraces of all goroutines, for badger/master with the patch above: https://gist.github.com/tony2001/0e97a7bb36b97970cf401394e5362b93

@tony2001
Author

So it looks like this (correct me if I'm wrong):

  • flusher needs to update stats stored in the db
  • flusher sends a request to the writer and is waiting on the request to be done
    (see updateDiscardStats() in value.go)
  • the writer (writeRequests() in db.go) tries to execute the requests, but finds that it doesn't have enough room to write them to disk, so it just Sleep()'s in a loop waiting for the flusher to free some more room
  • flusher and writer wait for each other to complete
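
In other words, it is a classic cyclic wait. A minimal self-contained illustration of that shape (hypothetical names, not Badger's actual code; running it trips Go's "all goroutines are asleep" deadlock detector):

    package main

    import "sync"

    func main() {
        roomFreed := make(chan struct{}) // the "writer" waits for the "flusher" to free room
        var reqDone sync.WaitGroup       // the "flusher" waits for its write request to finish
        reqDone.Add(1)

        // writer: can only complete the request after room has been freed
        go func() {
            <-roomFreed
            reqDone.Done()
        }()

        // flusher: waits for its request first and would only free room afterwards
        reqDone.Wait()   // blocks forever
        close(roomFreed) // never reached
    }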

@ashish-goswami
Contributor

Hey @tony2001, are you running Badger with DefaultOptions? If not, can you share the options struct with us?

@tony2001
Author

These are the options I use. I also use our own Log interface, but that shouldn't matter at all.

    opts := badger.DefaultOptions(config.Dir)
    opts.SyncWrites = false
    opts.TableLoadingMode = options.MemoryMap
    opts.ValueLogLoadingMode = options.MemoryMap

@tony2001
Author

Any news on this? A deadlock (or endless loop) looks fairly critical to me.

@jarifibrahim jarifibrahim added area/crash This issue causes a panic or some other kind of exception that causes a crash. kind/bug Something is broken. priority/P1 Serious issue that requires eventual attention (can wait a bit) status/accepted We accept to investigate or work on it. labels Sep 20, 2019
@poonai
Contributor

poonai commented Sep 23, 2019

@tony2001 we found the issue and we're working on the fix. I'll update you after it gets merged. :)

ashish-goswami added a commit that referenced this issue Oct 16, 2019
Fixes #1032

Currently the discardStats flow is as follows:
* Discard stats are generated during compaction. At the end, the compaction routine updates these stats in the vlog (the vlog maintains all discard stats). If the number of updates exceeds a threshold, a new request is generated and sent to the write channel; the routine then waits for the request to complete (request.Wait()).
* Requests are consumed from the write channel and written first to the vlog and then to the memtable.
* If the memtable is full, it is flushed to the flush channel.
* From the flush channel, memtables are written to L0 only if there are at most NumLevelZeroTablesStall tables there already.

Events which can lead to deadlock:
Compaction is running on L0, which currently has NumLevelZeroTablesStall tables, and tries to flush discard stats to the write channel. After pushing the stats to the write channel, it waits for the write request to complete, which can never happen because of the cyclic dependency.

Fix:
This PR introduces a buffered flush channel for discardStats. The compaction routine pushes generated discard stats to this flush channel; if the channel is full, it simply returns. This decouples compaction from writes. A separate routine consumes stats from the flush channel.
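
A minimal sketch of that non-blocking hand-off (assumed names such as pushDiscardStats and flushChan, with an arbitrary buffer size; not the actual Badger implementation):

    package sketch

    type discardStats map[uint32]int64

    // flushChan is buffered so the compaction path never has to block on it.
    var flushChan = make(chan discardStats, 16)

    // pushDiscardStats is called from compaction. If the channel is full it
    // drops the update and returns immediately, so compaction never waits on
    // the write path and the cycle described above cannot form.
    func pushDiscardStats(s discardStats) {
        select {
        case flushChan <- s:
        default:
            // channel full: skip this update; discard stats are best-effort
        }
    }

    // flushLoop runs in its own goroutine and persists stats as they arrive.
    func flushLoop(persist func(discardStats) error) {
        for s := range flushChan {
            _ = persist(s) // errors would be logged; ignored in this sketch
        }
    }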
@ashish-goswami
Contributor

@tony2001 I have merged the fix into master; please verify whether the change fixes the deadlock.

@jarifibrahim
Contributor

jarifibrahim commented Oct 23, 2019

Hey @tony2001, did you get a chance to try the fix?

@tony2001
Author

Unfortunately, no. We've solved the task with a different storage engine, so I'll have to implement a synthetic test to try it.

jarifibrahim pushed a commit that referenced this issue Mar 12, 2020
Fixes #1032

(cherry picked from commit c1cf0d7)