
store/tikv,executor: redesign the latch scheduler #7711

Merged: 13 commits into pingcap:master on Oct 9, 2018

Conversation

tiancaiamao (Contributor):

What problem does this PR solve?

The old latch scheduler checks maxCommitTS on each slot. Different keys can hash to the same slot, in which case the latch scheduler reports a false transaction conflict.

What is changed and how it works?

Each key now has its own maxCommitTS, so a hash collision no longer results in a false transaction conflict.
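
A minimal sketch of the per-key check, assuming the node shape quoted in the diff hunks below (the helper name maxCommitTSOf is hypothetical):

package latch

import "bytes"

type Lock struct{}

// node holds the per-key commit history inside one hash slot.
type node struct {
	key         []byte
	maxCommitTS uint64
	value       *Lock
	next        *node
}

// latch is one hash slot; several distinct keys can share it.
type latch struct {
	queue *node
}

// maxCommitTSOf returns the commit TS recorded for this exact key, so a
// different key that merely hashes to the same slot no longer reports a
// false conflict.
func (l *latch) maxCommitTSOf(key []byte) uint64 {
	for n := l.queue; n != nil; n = n.next {
		if bytes.Equal(n.key, key) {
			return n.maxCommitTS
		}
	}
	return 0
}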

Check List

Tests

  • Unit test

Code changes

  • remove RefreshCommitTS
  • redesign latch structure

PTAL @coocood

Check maxCommitTS on each key instead of each slot, so a hash collision
will not lead to a transaction retry.
@tiancaiamao (Contributor, Author):

PTAL @coocood @disksing

find = n
break
}
// TODO: Invalidate old data.
Contributor:

We cannot merge this PR before fixing this TODO, because without it we will get an OOM.

@tiancaiamao (Contributor, Author):

PTAL @disksing @coocood

@tiancaiamao (Contributor, Author):

@disksing

@tiancaiamao (Contributor, Author):

PTAL @disksing @coocood


func (l *latch) isEmpty() bool {
	return l.waitingQueueHead == 0 && !l.hasMoreWaiting

next *node
Contributor:

Why not use list.List?

Contributor (Author):

  1. list.List makes unnecessary allocations.

Using

type Element struct {

    // The value stored with this element.
    Value interface{}
    // contains filtered or unexported fields
}

is similar to using

type node struct {
    Value *nodeValue
}
type nodeValue struct {
    slotID      int
    key         []byte
    maxCommitTS uint64
    value       *Lock
}

  2. list.List is a doubly linked list, while a singly linked list is sufficient here.
  3. The list data structure is simple and common enough to implement directly.
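
As an aside, a minimal sketch of the allocation argument above (field names from the diff hunks in this PR; push is a hypothetical helper):

package latch

type Lock struct{}

// With container/list, each entry costs an Element allocation plus a boxed
// interface{} value. Embedding the payload in a hand-rolled node needs a
// single allocation per entry and only one forward pointer.
type node struct {
	slotID      int
	key         []byte
	maxCommitTS uint64
	value       *Lock
	next        *node
}

// push prepends n to the singly linked list and returns the new head.
func push(head, n *node) *node {
	n.next = head
	return n
}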


// Try to limit the memory usage.
if latch.count > latchListCount {
latch.recycle(lock.startTS)
Contributor:

Should recycle be moved into if find == nil {}?

for i := 0; i < len(latch.waiting); i++ {
waiting := latch.waiting[i]
if bytes.Compare(waiting.keys[waiting.acquiredCount], key) == 0 {
nextLock = waiting
Contributor:

Is it possible that more than one Lock in the waiting list has the same key?

Contributor (Author):

Possible! You found a bug.
I should only wake up the first one.
Waking up the first one is FIFO; there is still room for improvement here in choosing which one to wake up.
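
A sketch of the fix being described, with field names taken from the quoted diff (the helper name firstWaiterFor is hypothetical): scan the waiting list in order and stop at the first match.

package latch

import "bytes"

// Lock carries the keys a transaction still has to acquire; only the
// fields used here are stubbed out.
type Lock struct {
	keys          [][]byte
	acquiredCount int
}

// firstWaiterFor returns the oldest waiting Lock whose next key matches,
// plus its index, waking waiters in FIFO order instead of waking every
// Lock that shares the key.
func firstWaiterFor(waiting []*Lock, key []byte) (*Lock, int) {
	for i, w := range waiting {
		if bytes.Equal(w.keys[w.acquiredCount], key) {
			return w, i
		}
	}
	return nil, -1
}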

@disksing (Contributor):

Do we have any test/benchmark results?

@disksing (Contributor):

PTAL @zhangjinpeng1987

@tiancaiamao (Contributor, Author):

I used the update_non_index test case from sysbench; the table schema is:

CREATE TABLE `sbtest1` (
  `id` int(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `k` int(10) UNSIGNED NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin AUTO_INCREMENT=30001

The key range of k is 16 and the test concurrency is 256, so there are a lot of conflicts during the update operations.

When the latch scheduler is disabled:

sysbench --test=./lua-tests/db/update_non_index.lua --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root --mysql-password= --mysql-db=sbtest1 --oltp-tables-count=1 --oltp-table-size=16 --num-threads=256 --report-interval=10 --max-requests=2000000000 --percentile=95 --max-time=300 run
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --num-threads is deprecated, use --threads instead
WARNING: --max-requests is deprecated, use --events instead
WARNING: --max-time is deprecated, use --time instead
sysbench 1.0.11 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 256
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 256 tps: 6.10 qps: 6.10 (r/w/o: 0.00/6.10/0.00) lat (ms,95%): 8638.96 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 256 tps: 2.60 qps: 2.60 (r/w/o: 0.00/2.60/0.00) lat (ms,95%): 19078.64 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 256 tps: 3.10 qps: 3.10 (r/w/o: 0.00/3.10/0.00) lat (ms,95%): 27846.48 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 256 tps: 3.60 qps: 3.60 (r/w/o: 0.00/3.60/0.00) lat (ms,95%): 38506.38 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 256 tps: 5.00 qps: 5.00 (r/w/o: 0.00/5.00/0.00) lat (ms,95%): 49546.69 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 256 tps: 10.90 qps: 10.90 (r/w/o: 0.00/10.90/0.00) lat (ms,95%): 56202.50 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 256 tps: 1.20 qps: 1.20 (r/w/o: 0.00/1.20/0.00) lat (ms,95%): 66090.17 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 256 tps: 1.50 qps: 1.50 (r/w/o: 0.00/1.50/0.00) lat (ms,95%): 76330.47 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 256 tps: 1.80 qps: 1.80 (r/w/o: 0.00/1.80/0.00) lat (ms,95%): 82031.09 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 256 tps: 1.70 qps: 1.70 (r/w/o: 0.00/1.70/0.00) lat (ms,95%): 96462.77 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 256 tps: 1.40 qps: 1.40 (r/w/o: 0.00/1.40/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 120s ] thds: 256 tps: 1.70 qps: 1.70 (r/w/o: 0.00/1.70/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 130s ] thds: 256 tps: 1.90 qps: 1.90 (r/w/o: 0.00/1.90/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 140s ] thds: 256 tps: 2.60 qps: 2.60 (r/w/o: 0.00/2.60/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 150s ] thds: 256 tps: 2.10 qps: 2.10 (r/w/o: 0.00/2.10/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 160s ] thds: 256 tps: 1.50 qps: 1.50 (r/w/o: 0.00/1.50/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 170s ] thds: 256 tps: 1.50 qps: 1.50 (r/w/o: 0.00/1.50/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 180s ] thds: 256 tps: 1.30 qps: 1.30 (r/w/o: 0.00/1.30/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 190s ] thds: 256 tps: 1.20 qps: 1.20 (r/w/o: 0.00/1.20/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 200s ] thds: 256 tps: 1.60 qps: 1.60 (r/w/o: 0.00/1.60/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 210s ] thds: 256 tps: 1.80 qps: 1.80 (r/w/o: 0.00/1.80/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 220s ] thds: 256 tps: 1.80 qps: 1.80 (r/w/o: 0.00/1.80/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 230s ] thds: 256 tps: 1.90 qps: 1.90 (r/w/o: 0.00/1.90/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 240s ] thds: 256 tps: 1.90 qps: 1.90 (r/w/o: 0.00/1.90/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 256 tps: 1.10 qps: 1.10 (r/w/o: 0.00/1.10/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 256 tps: 1.70 qps: 1.70 (r/w/o: 0.00/1.70/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 270s ] thds: 256 tps: 1.90 qps: 1.90 (r/w/o: 0.00/1.90/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 280s ] thds: 256 tps: 1.70 qps: 1.70 (r/w/o: 0.00/1.70/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 290s ] thds: 256 tps: 1.40 qps: 1.40 (r/w/o: 0.00/1.40/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 300s ] thds: 256 tps: 3.30 qps: 3.30 (r/w/o: 0.00/3.30/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 310s ] thds: 256 tps: 1.00 qps: 1.00 (r/w/o: 0.00/1.00/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 320s ] thds: 256 tps: 1.10 qps: 1.10 (r/w/o: 0.00/1.10/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 330s ] thds: 256 tps: 1.10 qps: 1.10 (r/w/o: 0.00/1.10/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 340s ] thds: 256 tps: 1.30 qps: 1.30 (r/w/o: 0.00/1.30/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 350s ] thds: 256 tps: 1.20 qps: 1.20 (r/w/o: 0.00/1.20/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 360s ] thds: 256 tps: 1.30 qps: 1.30 (r/w/o: 0.00/1.30/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 370s ] thds: 256 tps: 1.40 qps: 1.40 (r/w/o: 0.00/1.40/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 380s ] thds: 256 tps: 1.80 qps: 1.80 (r/w/o: 0.00/1.80/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 390s ] thds: 256 tps: 1.90 qps: 1.90 (r/w/o: 0.00/1.90/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 400s ] thds: 256 tps: 2.60 qps: 2.60 (r/w/o: 0.00/2.60/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
[ 410s ] thds: 255 tps: 3.30 qps: 3.30 (r/w/o: 0.00/3.30/0.00) lat (ms,95%): 100000.00 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            0
        write:                           984
        other:                           0
        total:                           984
    transactions:                        984    (2.36 per sec.)
    queries:                             984    (2.36 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          417.4665s
    total number of events:              984

Latency (ms):
         min:                                  9.80
         avg:                              99167.12
         max:                             416923.60
         95th percentile:                 100000.00
         sum:                            97580442.66

Threads fairness:
    events (avg/stddev):           3.8438/3.03
    execution time (avg/stddev):   381.1736/34.79

When the latch scheduler is enabled:

sysbench --test=./lua-tests/db/update_non_index.lua --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root --mysql-password= --mysql-db=sbtest1 --oltp-tables-count=1 --oltp-table-size=16 --num-threads=256 --report-interval=10 --max-requests=2000000000 --percentile=95 --max-time=300 run
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --num-threads is deprecated, use --threads instead
WARNING: --max-requests is deprecated, use --events instead
WARNING: --max-time is deprecated, use --time instead
sysbench 1.0.11 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 256
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 256 tps: 122.57 qps: 122.57 (r/w/o: 0.00/122.57/0.00) lat (ms,95%): 5607.61 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 256 tps: 110.10 qps: 110.10 (r/w/o: 0.00/110.10/0.00) lat (ms,95%): 9624.59 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 256 tps: 107.91 qps: 107.91 (r/w/o: 0.00/107.91/0.00) lat (ms,95%): 10722.67 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 256 tps: 104.80 qps: 104.80 (r/w/o: 0.00/104.80/0.00) lat (ms,95%): 11115.87 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 256 tps: 103.80 qps: 103.80 (r/w/o: 0.00/103.80/0.00) lat (ms,95%): 11523.48 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 256 tps: 120.90 qps: 120.90 (r/w/o: 0.00/120.90/0.00) lat (ms,95%): 9977.52 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 256 tps: 111.30 qps: 111.30 (r/w/o: 0.00/111.30/0.00) lat (ms,95%): 10917.50 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 256 tps: 105.40 qps: 105.40 (r/w/o: 0.00/105.40/0.00) lat (ms,95%): 9799.46 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 256 tps: 104.40 qps: 104.40 (r/w/o: 0.00/104.40/0.00) lat (ms,95%): 10722.67 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 256 tps: 122.70 qps: 122.70 (r/w/o: 0.00/122.70/0.00) lat (ms,95%): 10722.67 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 256 tps: 108.00 qps: 108.00 (r/w/o: 0.00/108.00/0.00) lat (ms,95%): 9977.52 err/s: 0.00 reconn/s: 0.00
[ 120s ] thds: 256 tps: 125.41 qps: 125.41 (r/w/o: 0.00/125.41/0.00) lat (ms,95%): 10531.32 err/s: 0.00 reconn/s: 0.00
[ 130s ] thds: 256 tps: 128.40 qps: 128.40 (r/w/o: 0.00/128.40/0.00) lat (ms,95%): 8955.74 err/s: 0.00 reconn/s: 0.00
[ 140s ] thds: 256 tps: 120.60 qps: 120.60 (r/w/o: 0.00/120.60/0.00) lat (ms,95%): 9624.59 err/s: 0.00 reconn/s: 0.00
[ 150s ] thds: 256 tps: 126.00 qps: 126.00 (r/w/o: 0.00/126.00/0.00) lat (ms,95%): 9452.83 err/s: 0.00 reconn/s: 0.00
[ 160s ] thds: 256 tps: 121.10 qps: 121.10 (r/w/o: 0.00/121.10/0.00) lat (ms,95%): 9624.59 err/s: 0.00 reconn/s: 0.00
[ 170s ] thds: 256 tps: 117.50 qps: 117.50 (r/w/o: 0.00/117.50/0.00) lat (ms,95%): 9799.46 err/s: 0.00 reconn/s: 0.00
[ 180s ] thds: 256 tps: 109.60 qps: 109.60 (r/w/o: 0.00/109.60/0.00) lat (ms,95%): 10531.32 err/s: 0.00 reconn/s: 0.00
[ 190s ] thds: 256 tps: 111.00 qps: 111.00 (r/w/o: 0.00/111.00/0.00) lat (ms,95%): 10531.32 err/s: 0.00 reconn/s: 0.00
[ 200s ] thds: 256 tps: 118.10 qps: 118.10 (r/w/o: 0.00/118.10/0.00) lat (ms,95%): 9799.46 err/s: 0.00 reconn/s: 0.00
[ 210s ] thds: 256 tps: 118.60 qps: 118.60 (r/w/o: 0.00/118.60/0.00) lat (ms,95%): 9284.15 err/s: 0.00 reconn/s: 0.00
[ 220s ] thds: 256 tps: 114.90 qps: 114.90 (r/w/o: 0.00/114.90/0.00) lat (ms,95%): 10722.67 err/s: 0.00 reconn/s: 0.00
[ 230s ] thds: 256 tps: 117.69 qps: 117.69 (r/w/o: 0.00/117.69/0.00) lat (ms,95%): 10722.67 err/s: 0.00 reconn/s: 0.00
[ 240s ] thds: 256 tps: 118.61 qps: 118.61 (r/w/o: 0.00/118.61/0.00) lat (ms,95%): 9799.46 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 256 tps: 106.78 qps: 106.78 (r/w/o: 0.00/106.78/0.00) lat (ms,95%): 9284.15 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 256 tps: 102.20 qps: 102.20 (r/w/o: 0.00/102.20/0.00) lat (ms,95%): 12384.09 err/s: 0.00 reconn/s: 0.00
[ 270s ] thds: 256 tps: 106.02 qps: 106.02 (r/w/o: 0.00/106.02/0.00) lat (ms,95%): 10722.67 err/s: 0.00 reconn/s: 0.00
[ 280s ] thds: 256 tps: 108.10 qps: 108.10 (r/w/o: 0.00/108.10/0.00) lat (ms,95%): 11115.87 err/s: 0.00 reconn/s: 0.00
[ 290s ] thds: 256 tps: 106.10 qps: 106.10 (r/w/o: 0.00/106.10/0.00) lat (ms,95%): 10917.50 err/s: 0.00 reconn/s: 0.00
[ 300s ] thds: 256 tps: 105.20 qps: 105.20 (r/w/o: 0.00/105.20/0.00) lat (ms,95%): 9799.46 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            0
        write:                           34294
        other:                           0
        total:                           34294
    transactions:                        34294  (113.18 per sec.)
    queries:                             34294  (113.18 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          302.9876s
    total number of events:              34294

Latency (ms):
         min:                                  7.52
         avg:                               2249.82
         max:                              47590.55
         95th percentile:                  10343.39
         sum:                            77155206.07

Threads fairness:
    events (avg/stddev):           133.9609/20.42
    execution time (avg/stddev):   301.3875/0.75

That is nearly a 50x throughput improvement (2.36 vs. 113.18 transactions per second).

@disksing

@@ -40,13 +44,30 @@ func NewScheduler(size uint) *LatchesScheduler {
return scheduler
}

const checkInterval = 10 * time.Minute
const expireDuration = 2 * time.Hour
Contributor:

Too long. Be aware that a transaction can now last at most 10 minutes, so it should be safe to clean up commit history older than 10 minutes.
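
For context, a sketch of how a duration can be derived from two TSOs, assuming TiDB's usual TSO layout (physical milliseconds in the high bits, above an 18-bit logical counter); the real tsoSub in this PR may differ in detail:

package latch

import "time"

// tsoSub returns the wall-clock duration between two TSOs by comparing
// their physical parts (the bits above the 18-bit logical counter).
func tsoSub(curr, prev uint64) time.Duration {
	physicalCurr := int64(curr >> 18) // milliseconds
	physicalPrev := int64(prev >> 18)
	return time.Duration(physicalCurr-physicalPrev) * time.Millisecond
}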


// Handle the head node.
if tsoSub(currentTS, l.queue.maxCommitTS) >= expireDuration && l.queue.value == nil {
l.queue = nil
Contributor:

No count--?

if tsoSub(currentTS, l.queue.maxCommitTS) >= expireDuration && l.queue.value == nil {
l.queue = nil
}
return
Contributor:

No need to return here.

}

prev := l.queue
curr := l.queue.next
Contributor:

Better to initialize prev = nil, curr = l.queue, so you don't have to handle the head node separately. Or consider the indirect-pointer trick suggested by Linus Torvalds: https://medium.com/@bartobri/applying-the-linus-tarvolds-good-taste-coding-requirement-99749f37684a
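
A minimal sketch of that trick applied to this loop (node fields from the diff hunks above; expireDuration as quoted earlier, tsoSub as sketched above; the exact recycle signature in the PR may differ): walking a pointer-to-pointer makes head and interior removal the same code path.

package latch

import "time"

const expireDuration = 2 * time.Hour

type Lock struct{}

type node struct {
	maxCommitTS uint64
	value       *Lock
	next        *node
}

type latch struct {
	queue *node
	count int
}

func tsoSub(curr, prev uint64) time.Duration {
	return time.Duration(int64(curr>>18)-int64(prev>>18)) * time.Millisecond
}

// recycle unlinks expired, lock-free nodes; *pp is always the link that
// points at the node under inspection, so there is no special head case.
func (l *latch) recycle(currentTS uint64) {
	for pp := &l.queue; *pp != nil; {
		n := *pp
		if tsoSub(currentTS, n.maxCommitTS) >= expireDuration && n.value == nil {
			*pp = n.next // rewrite whichever link pointed at n
			l.count--
		} else {
			pp = &n.next
		}
	}
}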

Contributor (Author):

Actually, I found that trick myself before I saw any article about it. (I remember writing a blog post, http://www.zenlife.tk/fake-list-head.md, around that time.)

Contributor:

I admire you.

@disksing (Contributor):

LGTM.

type latch struct {
queue *node
count int
waiting []*Lock
Contributor:

Why doesn't each node have its own waiting queue?

Contributor (Author):

The waiting queue is moved from each node to the latch for these reasons:

  1. Nodes are inserted into the queue every now and then; if each node had its own waiting queue, queues would be created and destroyed constantly. That means more allocations and worse memory efficiency.
  2. I assume the waiting queue will not be large, and when a list is small enough an array is very efficient.
  3. You may still remember the "first waiting one automatically becomes running" problem, and the old code was complex enough handling the different states. If nodes don't have their own waiting queues, that problem is avoided.

@zhangjinpeng1987

if idx < len(latch.waiting) {
nextLock = latch.waiting[idx]
// Delete element latch.waiting[idx] from the array.
copy(latch.waiting[idx:], latch.waiting[idx+1:])
Contributor:

It's better to use a list for waiting locks.

Contributor:

@coocood What's your advice?

@zhangjinpeng87 (Contributor):

@tiancaiamao Have you tested the sysbench OLTP_RW pareto scenario?

@zhangjinpeng87 (Contributor) left a comment:

LGTM

@tiancaiamao (Contributor, Author):

/run-all-tests

@tiancaiamao merged commit c19f8fb into pingcap:master on Oct 9, 2018
@tiancaiamao deleted the latch branch on October 9, 2018 08:31
tiancaiamao added a commit to tiancaiamao/tidb that referenced this pull request Oct 10, 2018
Check maxCommitTS on each key instead of each slot, so a hash collision
will not lead to a transaction retry.