sql: drop table and recreate fail #7348
Are you running the …
@andreimatei I'm not actually using a transaction, and after the statement all the nodes got errors. I used to hit this when dropping a table with 1M+ records, and I thought the cause was the table size, but now I see the same error with a table of only ~10k rows.
Hmmm, this "shouldn't happen" without a txn. I'll try to reproduce. Or if …
I believe this is the intended behavior, right? The name is available for …
Right, but the table is truncated and "fully deleted" by the schema changer, which (when run through the "synchronous path") blocks the client executing the …
@andreimatei
root@192.168.181.91:26257> create database test;
/*
root@192.168.181.91:26257> show tables;
and the node (192.168.181.93) got an error.
I couldn't manage to reproduce this. @idsj, would you mind confirming you're running a recent build of cockroach? When you launch it with …
Also, can you confirm that the … Also, would you mind trying to reproduce it using a single-node cluster, just to see if that changes anything? Thanks!
@andreimatei Yes, it executes in a few seconds. Did you reproduce it using a cluster of VMs? It did not happen in a VM environment; I have tried that too. It happened in a physical-machine environment.
OK, managed to repro using three nodes. Looking.
For an update here: a number of problems have been made apparent by this issue. There's an outright bug that would sometimes lead to this behavior; it's being fixed in #7504. Ways to avoid a huge transaction for truncation are being discussed in #7499.
We should fix this by renaming the table before dropping it. |
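The rename-before-drop idea can be sketched at the SQL level. This is an illustrative sketch only, not the actual fix (which would happen internally in the schema changer); the temporary name `test_serial_dropping` is hypothetical, and `test_serial` is the table from the report:

```sql
-- Sketch, assuming the fix is not yet deployed: rename first so the
-- original name is freed as soon as the rename commits, then drop.
-- The slow truncation runs against the throwaway name instead.
ALTER TABLE test_serial RENAME TO test_serial_dropping;
DROP TABLE test_serial_dropping;

-- The original name is free, so recreating the table no longer fails
-- with: pq: table "test_serial" already exists
CREATE TABLE test_serial (a SERIAL PRIMARY KEY, b STRING(30), c BOOL);
```

The point of doing this inside the schema changer rather than asking users to rename manually is that the name release no longer waits on the (potentially long) truncation of the table's data.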
When dropping a table, the table name would be released only after the table was truncated. Table truncation can take a long time and it doesn't make for a good user experience when a user expects a name for a dropped table to be available almost immediately. fixes cockroachdb#7348
I160621 11:06:31.940101 util/log/clog.go:998 [config] file created at: 2016/06/21 11:06:31
I160621 11:06:31.940101 util/log/clog.go:998 [config] running on machine: longong-dsj-kv1
I160621 11:06:31.940101 util/log/clog.go:998 [config] binary: CockroachDB beta-20160616 (linux amd64, built 2016/06/16 16:51:45, go1.6.2)
I160621 11:06:31.940101 util/log/clog.go:998 [config] arguments: [./cockroach start --host=192.168.181.91 --insecure --join=192.168.181.92:26258]
e.g.
root@192.168.181.91:26257> drop table test_serial;
DROP TABLE
root@192.168.181.91:26257> select count(*) from test_serial;
pq: table is being deleted
root@192.168.181.91:26257> show tables;
+-------------+
| Table |
+-------------+
| test_serial |
+-------------+
create a new table with same name
I cannot create a new table with the same name, e.g.
root@192.168.181.91:26257> create table test_serial(a serial primary key,b string(30),c bool);
pq: table "test_serial" already exists