
sql: transactions containing certain DDL statements are not atomic #42061

Open
knz opened this issue Oct 31, 2019 · 28 comments
Labels
A-schema-changes A-schema-descriptors Relating to SQL table/db descriptor handling. C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. S-3-ux-surprise Issue leaves users wondering whether CRDB is behaving properly. Likely to hurt reputation/adoption. T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions) X-anchored-telemetry The issue number is anchored by telemetry references.

Comments

@knz
Contributor

knz commented Oct 31, 2019

Related to #26508, #24785.
Filing this in reaction to a recent thread with @mberhault

Today, CockroachDB accepts DDL statements inside explicit txns (BEGIN...COMMIT), but processes them only after the non-DDL statements have been committed successfully.

If the DDL subsequently fails, the transaction is then both partially committed and partially rolled back, i.e. an atomicity violation.

If/when this happens, it is currently reported to the client upon COMMIT via the special error code XXA00 (TransactionCommittedWithSchemaChangeFailure).

There is a discussion to be had about whether this situation should be allowed to occur at all, but until that is settled this issue will exist so it can be linked from the source code and docs for explanatory purposes.

Jira issue: CRDB-5394
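The failure mode described above can be sketched from a client's perspective. This is a toy model, not CockroachDB code; only the error code XXA00 and its meaning come from the report above:

```python
# Toy classification of COMMIT outcomes for a txn mixing DML with
# async-processed DDL. Helper names are illustrative, not a real API.

XXA00 = "XXA00"  # TransactionCommittedWithSchemaChangeFailure

def classify_commit(error_code):
    """Map the error code returned by COMMIT to an atomicity outcome."""
    if error_code is None:
        return "fully committed"
    if error_code == XXA00:
        # The DML half committed but the async DDL half failed and was
        # rolled back: an atomicity violation.
        return "partially committed, partially rolled back"
    # Any other error aborts the whole txn.
    return "fully rolled back"

print(classify_commit("XXA00"))  # → partially committed, partially rolled back
```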

@knz knz added C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. S-3-ux-surprise Issue leaves users wondering whether CRDB is behaving properly. Likely to hurt reputation/adoption. A-schema-descriptors Relating to SQL table/db descriptor handling. A-schema-changes X-anchored-telemetry The issue number is anchored by telemetry references. labels Oct 31, 2019
knz added a commit to knz/cockroach that referenced this issue Oct 31, 2019
CockroachDB currently allows an explicit BEGIN..COMMIT txn containing
DDL to fail atomicity in certain cases. These cases are detected
during COMMIT and reported to the client as error XXA00.

This patch extends the error message to link to issue cockroachdb#42061 which
describes and handles this situation further.

Release note (sql change): the error message generated when a
txn containing DDL is both partially committed and partially rolled back
(error XXA00) now contains a link to github issue cockroachdb#42061 where this
situation is discussed further.
@knz
Contributor Author

knz commented Oct 31, 2019

Proposing a session flag strict_ddl_atomicity to control this in PR #42063

@knz
Contributor Author

knz commented Oct 31, 2019

Perhaps related (but unlikely): #42064

knz added a commit to knz/cockroach that referenced this issue Oct 31, 2019
Previously, CockroachDB always accepted DDL inside explicit
txns (BEGIN...COMMIT) and would issue some of these DDL statements as
asynchronous jobs after the rest of the txn was committed
successfully. If that DDL subsequently failed, a client would also get
error XXA00 to signal this error. However at that point the txn also
would be both partially committed and partially rolled back. This is
in effect an atomicity violation.

It was considered to be acceptable because in practice most clients
using DDL inside BEGIN...COMMIT do not, in fact, expect ACID semantics
in those cases. So it was considered OK to not deliver ACID semantics
in certain cases.

Except that we now have clients that do want us to "do the right
thing" and never allow a situation that _could_ yield atomicity
violations.

This patch makes a step in this direction by introducing a new session
variable `strict_ddl_atomicity`, which defaults to `false` for
compatibility with existing clients of CockroachDB 19.2 and of
previous CockroachDB versions.

When this setting is enabled (set to `true`), any DDL that would
otherwise be processed asynchronously and risk triggering error
XXA00 is now fully disallowed, with the following error:

```
pq: unimplemented: cannot run this DDL statement inside BEGIN..COMMIT as its atomicity cannot be guaranteed
HINT: You have attempted to use a feature that is not yet implemented.
See: cockroachdb#42061
--
You can set the session variable 'strict_ddl_atomicity' to false if you are willing to accept atomicity violations.
```

Other DDL statements without async processing are still allowed
without triggering that error.

Examples of rejected DDL include but are not limited to: CREATE
INDEX, ALTER ADD CONSTRAINT, ADD COLUMN with DEFAULT, DROP INDEX.

Examples of DDL that's still processed: RENAME COLUMN, RENAME TABLE,
ALTER INDEX CONFIGURE ZONE.

Release note (sql change): a SQL client can now request strict
atomicity for mixed DDL/DML transactions with the new session variable
`strict_ddl_atomicity`, which defaults to `false`. When this variable
is set to `true`, CockroachDB will refuse to process, inside
BEGIN...COMMIT, those specific DDL statements for which it cannot
guarantee atomic processing (other DDL statements are still allowed).
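The gating the patch describes can be sketched as follows. This is an illustrative model, not the actual implementation; the statement labels are abbreviated from the examples in the commit message:

```python
# Sketch: under strict_ddl_atomicity, DDL with a failable async phase is
# rejected up front inside BEGIN..COMMIT instead of risking XXA00 later.

# Abbreviated labels for DDL the commit message says would be rejected.
ASYNC_DDL = {
    "CREATE INDEX",
    "ALTER ADD CONSTRAINT",
    "ADD COLUMN with DEFAULT",
    "DROP INDEX",
}

def check_ddl(stmt, strict_ddl_atomicity, in_explicit_txn):
    """Accept the statement or raise, mimicking the XXA00-avoidance gate."""
    if strict_ddl_atomicity and in_explicit_txn and stmt in ASYNC_DDL:
        raise NotImplementedError(
            "cannot run this DDL statement inside BEGIN..COMMIT "
            "as its atomicity cannot be guaranteed")
    return "accepted"

print(check_ddl("RENAME COLUMN", True, True))  # → accepted
```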
@knz
Contributor Author

knz commented Oct 31, 2019

I created an inventory of all the currently recognized DDL and how they are affected by this error:

| DDL | Sync part | Async part | Async may fail and yield XXA00? | Constraint-like DDL which can be VALIDATEd |
|---|---|---|---|---|
| CREATE USER | insert row | none | N/A | N/A |
| CREATE ROLE | insert row | none | N/A | N/A |
| CREATE VIEW | create desc, non-public | publish desc | no | N/A |
| CREATE SEQUENCE | create desc, non-public | publish desc | no | N/A |
| CREATE TABLE, no constraints | create desc, non-public | publish desc | no | N/A |
| CREATE TABLE, with constraints | create desc, non-public + constraints immediately active | publish desc | no | N/A |
| CREATE TABLE AS (see note below) | create desc, non-public + insert | publish desc | no | N/A |
| CREATE INDEX (non-unique) | create desc, non-public | index backfill, then publish desc | no | yes (index active/inactive) |
| CREATE UNIQUE INDEX | create desc, non-public | index backfill, then publish desc | yes (duplicate rows) | yes (index active/inactive + UNIQUE constraint) |
| CREATE STATISTICS | define a "create stats" job | none | no | N/A |
| TRUNCATE TABLE | mark desc as dropped, create new desc | unpublish old desc, publish new desc | no | N/A |
| DROP USER | delete row | none | N/A | N/A |
| DROP ROLE | delete row | none | N/A | N/A |
| DROP DATABASE RESTRICT | delete row | none | N/A | N/A |
| DROP VIEW RESTRICT | unpublish | delete desc | no | N/A |
| DROP TABLE RESTRICT | unpublish | delete desc | no | N/A |
| DROP INDEX RESTRICT | unpublish | delete desc | no | N/A |
| DROP SEQUENCE RESTRICT | unpublish | delete desc | no | N/A |
| DROP VIEW CASCADE | unpublish + drop dependencies | delete desc + dependency async | same as deps | N/A |
| DROP TABLE CASCADE | unpublish + drop dependencies | delete desc + dependency async | same as deps | N/A |
| DROP INDEX CASCADE | unpublish + drop dependencies | delete desc + dependency async | same as deps | N/A |
| DROP SEQUENCE CASCADE | unpublish + drop dependencies | delete desc + dependency async | same as deps | N/A |
| DROP DATABASE CASCADE | delete db row + drop dependencies | dependency async | same as deps | N/A |
| ALTER TABLE RENAME | update desc in-RAM | publish updated desc | no | N/A |
| ALTER TABLE RENAME COLUMN | update desc in-RAM | publish updated desc | no | N/A |
| ALTER TABLE RENAME CONSTRAINT | update desc in-RAM | publish updated desc | no | N/A |
| ALTER TABLE DROP COLUMN | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE DROP CONSTRAINT | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE ALTER DROP DEFAULT | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE ALTER DROP NOT NULL | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE ALTER DROP STORED | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE ALTER TYPE (no conv) | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE AUDIT SET | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE ADD COLUMN NULL | create mutation, non-public | publish mutation | no | N/A |
| ALTER TABLE ADD CONSTRAINT NOT NULL NOT VALID | create non-public mutation, mark constraint as invalid | publish desc | no | N/A |
| ALTER TABLE ADD CONSTRAINT DEFAULT NOT VALID | create non-public mutation, mark constraint as invalid | publish desc | no | N/A |
| ALTER TABLE ADD CONSTRAINT UNIQUE NOT VALID | create non-public mutation, mark constraint as invalid | publish desc | no | N/A |
| ALTER TABLE ADD CONSTRAINT CHECK NOT VALID | create non-public mutation, mark constraint as invalid | publish desc | no | N/A |
| ALTER TABLE ADD CONSTRAINT FK NOT VALID | create non-public mutation, mark constraint as invalid | publish desc | no | N/A |
| ALTER TABLE ADD COLUMN DEFAULT | create mutation, non-public | column backfill, then publish desc | yes (scalar eval fails) | NO (but see ALTER ADD CONSTRAINT SET DEFAULT) |
| ALTER TABLE ADD CONSTRAINT NOT NULL | create mutation, non-public | column validation, then publish desc | yes (null row) | yes |
| ALTER TABLE ADD CONSTRAINT DEFAULT | create mutation, non-public | column backfill, then publish desc | yes (scalar eval fails) | yes |
| ALTER TABLE ADD CONSTRAINT UNIQUE | create mutation + index, non-public | index backfill, then publish desc | yes (dup row) | yes |
| ALTER TABLE ADD CONSTRAINT CHECK | create mutation, non-public | check validation, then publish desc | yes (check fails) | yes |
| ALTER TABLE ADD CONSTRAINT FK | create mutation + index, non-public | index backfill + FK validation, then publish desc | yes (unique error, row not found) | yes |
| ALTER TABLE ALTER SET DEFAULT | create mutation, non-public | column backfill, then publish desc | yes (scalar eval fails) | NO (!!!) |
| ALTER TABLE ALTER TYPE (w/ data conv) | create mutation, non-public | background data conversion, then publish desc | yes (conversion fails) | NO (!!!) |
| ALTER TABLE PARTITION BY | create mutation, non-public | perform partitioning, then publish desc | no (?) | N/A (?) |
| ALTER TABLE VALIDATE CONSTRAINT | run validation in-line | none | N/A | N/A |
| ALTER TABLE SPLIT AT | issue admin split command | none | N/A | N/A |
| ALTER TABLE RELOCATE | issue admin relocate command | none | N/A | N/A |
| ALTER TABLE UNSPLIT AT | issue admin unsplit command | none | N/A | N/A |
| ALTER TABLE UNSPLIT ALL | issue admin unsplit command | none | N/A | N/A |
| ALTER TABLE SCATTER | issue admin scatter command | none | N/A | N/A |
| ALTER TABLE INJECT STATISTICS | delete + insert row in system table | none | N/A | N/A |
| ALTER TABLE CONFIGURE ZONE | update row in system table | none | N/A | N/A |
| ALTER SEQUENCE | update desc in-RAM | publish updated desc | no | N/A |
| ALTER INDEX RENAME | update desc in-RAM | publish updated desc | no | N/A |
| ALTER INDEX SPLIT AT | issue admin split command | none | N/A | N/A |
| ALTER INDEX RELOCATE | issue admin relocate command | none | N/A | N/A |
| ALTER INDEX UNSPLIT AT | issue admin unsplit command | none | N/A | N/A |
| ALTER INDEX UNSPLIT ALL | issue admin unsplit command | none | N/A | N/A |
| ALTER INDEX SCATTER | issue admin scatter command | none | N/A | N/A |
| ALTER INDEX CONFIGURE ZONE | update row in system table | none | N/A | N/A |

Note about CREATE TABLE AS: this statement processes the insert
asynchronously only when executed outside an explicit txn.
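For programmatic use, a few rows of the inventory can be encoded as a lookup. This is a sketch: the tuples paraphrase the table above, and only these five rows are included:

```python
# Subset of the inventory: statement -> (async part, async may fail and
# yield XXA00?). None means the statement has no async part.
DDL_INVENTORY = {
    "CREATE INDEX (non-unique)":      ("index backfill, then publish desc", False),
    "CREATE UNIQUE INDEX":            ("index backfill, then publish desc", True),
    "ALTER TABLE ADD COLUMN DEFAULT": ("column backfill, then publish desc", True),
    "ALTER TABLE RENAME COLUMN":      ("publish updated desc", False),
    "ALTER TABLE SPLIT AT":           (None, False),
}

def may_violate_atomicity(stmt):
    """True if the statement's async phase can fail after COMMIT."""
    _async_part, can_fail = DDL_INVENTORY[stmt]
    return can_fail

print(may_violate_atomicity("CREATE UNIQUE INDEX"))  # → True
```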

@knz
Copy link
Contributor Author

knz commented Oct 31, 2019

@andy-kimball in a separate thread suggests that every "long-running" DDL should be treated as split into a sync part and an async part, with the understanding that the async part may fail.

A key part of this proposal is that a client must be able to verify whether the async part has completed successfully.

When the complex DDL is related to a constraint, this is trivially possible, as both pg and crdb have a VALIDATE CONSTRAINT statement for this specific purpose.
This can even be extended to CREATE UNIQUE INDEX, which can be understood as CREATE INDEX + ADD CONSTRAINT UNIQUE.

However, as per the table in my previous comment above, the following statements do not give clients a clear way to verify completion:

  • ALTER TABLE ADD COLUMN with DEFAULT, and ALTER TABLE ALTER COLUMN SET DEFAULT.
  • ALTER TABLE ALTER COLUMN SET TYPE with data conversion
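The verification step this proposal requires can be sketched as follows. The `execute` callback and helper name are hypothetical; only `VALIDATE CONSTRAINT` is a real statement in pg and crdb:

```python
# Hypothetical post-COMMIT verification helper. Constraint-like DDL can be
# checked with VALIDATE CONSTRAINT; the two statement families listed above
# have no equivalent statement.

def verify_async_ddl(execute, table, constraint=None):
    """Report how (or whether) the async phase of a DDL can be verified."""
    if constraint is not None:
        # Constraint-like DDL: both pg and crdb support this statement.
        execute(f"ALTER TABLE {table} VALIDATE CONSTRAINT {constraint}")
        return "verified via VALIDATE CONSTRAINT"
    # e.g. ADD COLUMN ... DEFAULT, or ALTER COLUMN SET TYPE with data
    # conversion: no statement reports whether the backfill completed.
    return "no clear verification mechanism"

issued = []
print(verify_async_ddl(issued.append, "t", constraint="uniq_k"))
# → verified via VALIDATE CONSTRAINT
```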

@knz
Contributor Author

knz commented Oct 31, 2019

For reference here is what pg 12 has to say about ALTER TABLE SET TYPE, ADD DEFAULT and other ALTER forms that require a table rewrite: https://www.postgresql.org/docs/12/mvcc-caveats.html

Some DDL commands, currently only TRUNCATE and the table-rewriting forms of ALTER TABLE, are not MVCC-safe. This means that after the truncation or rewrite commits, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the DDL command committed.
This will only be an issue for a transaction that did not access the table in question before the DDL command started — any transaction that has done so would hold at least an ACCESS SHARE table lock, which would block the DDL command until that transaction completes. So these commands will not cause any apparent inconsistency in the table contents for successive queries on the target table, but they could cause visible inconsistency between the contents of the target table and other tables in the database.

@knz
Contributor Author

knz commented Oct 31, 2019

For reference here is what MSSQL has to say about online changes, including type changes and new columns: https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-table-transact-sql?view=sql-server-ver15

Online alter column allows user created and autostatistics to reference the altered column for the duration of the ALTER COLUMN operation, which allows queries to run as usual. At the end of the operation, autostats that reference the column are dropped and user-created stats are invalidated. The user must manually update user-generated statistics after the operation is completed. If the column is part of a filter expression for any statistics or indexes then you can't perform an alter column operation.
While the online alter column operation is running, all operations that could take a dependency on the column (index, views, and so on.) block or fail with an appropriate error. This behavior guarantees that online alter column won't fail because of dependencies introduced while the operation was running.

The language is a bit obfuscated, but it says (in other words) that ALTER COLUMN SET TYPE really adds a new column to perform the change, then converts the data, then drops the original column. Until/unless the conversion completes, concurrent txns either continue to observe the original column (until the DDL-containing txn commits) or are suspended waiting on a lock for the new column (after the DDL-containing txn logically commits).

Similarly for new columns, the new column remains invisible until the backfill completes.

(It's not clear how clients are meant to observe whether the backfill completed successfully or not.)
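My reading of the MSSQL strategy above can be illustrated with a toy in-memory simulation (not vendor code): the conversion runs to completion on a shadow copy, and the swap happens only afterwards, so a failed conversion leaves the original column untouched:

```python
# Toy model of online ALTER COLUMN SET TYPE: convert into a shadow column,
# then swap. A mid-conversion failure leaves the table unchanged.

def alter_column_type(table, col, convert):
    shadow = [convert(row[col]) for row in table]  # background conversion
    for row, new_val in zip(table, shadow):
        row[col] = new_val                         # swap after full success
    return table

t = [{"id": 1, "v": "10"}, {"id": 2, "v": "20"}]
alter_column_type(t, "v", int)
print(t)  # → [{'id': 1, 'v': 10}, {'id': 2, 'v': 20}]
```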

@knz
Contributor Author

knz commented Oct 31, 2019

As per today's SIG notes, @lucy-zhang and @dt to review my analysis above (including the complete DDL table).

knz added a commit to knz/cockroach that referenced this issue Oct 31, 2019
knz added a commit to knz/cockroach that referenced this issue Oct 31, 2019
@knz
Contributor Author

knz commented Oct 31, 2019

Separately, regarding Andy's proposal to avoid an error in COMMIT even when the async DDL fails, @rafiss contributes:

it may not be clear to the user or schema migration tool that a successful COMMIT can leave the constraint in a NOT VALIDATED state.
The risk to the user is that they continue adding data to the table, and they think it satisfies the constraint, but in fact more constraint violations pile up, making it even harder to fix the data and validate the constraint later (and that's if anyone ever realizes they need to do that).
I don't really think it would be easy to clarify this with user education alone -- there are so many tools that exist already that assume a successful commit means that the constraints added in the transaction were applied and validated successfully. The fundamental issue is that, to me, using unvalidated constraints by default seems like an extremely significant departure from what users may expect, especially those expecting lots of Postgres compatibility.
Correction: Radu pointed out to me that newly inserted data will actually be validated against the constraint even when it is in the NOT VALIDATED state. But, still I have concerns about leaving the constraint NOT VALIDATED in a way that would be hard for a user to ever notice.
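The check this concern implies a migration tool would need can be sketched as follows. The helper is hypothetical; `pg_catalog.pg_constraint.convalidated` is the Postgres catalog column recording validation state, which crdb's pg_catalog compatibility layer is assumed here to expose as well:

```python
# Hypothetical post-migration audit: list constraints left NOT VALIDATED.
# run_query is a stand-in for a real database driver call.

UNVALIDATED_QUERY = """
SELECT conname
FROM pg_catalog.pg_constraint
WHERE conrelid = %s::regclass AND NOT convalidated
"""

def unvalidated_constraints(run_query, table):
    """Return names of constraints on `table` that were never validated."""
    return [name for (name,) in run_query(UNVALIDATED_QUERY, table)]
```

A tool could run this after every COMMIT that contained DDL and warn if the list is non-empty, rather than relying on users noticing the NOT VALIDATED state on their own.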

@knz
Contributor Author

knz commented Oct 31, 2019

Example Go ORM affected: http://github.com/gobuffalo/pop especially https://github.com/gobuffalo/pop/blob/master/migrator.go#L93

Will yield invalid ORM migration behavior if error XXA00 occurs.

@bdarnell
Contributor

The fundamental issue is that, to me, using unvalidated constraints by default seems like an extremely significant departure from what users may expect, especially those expecting lots of Postgres compatibility.

Yeah, this is the key, and suggests another path forward: require the NOT VALID modifier on all constraint additions in transactions, turning all such changes into explicit two-step operations to add the invalid constraint and then validate it.
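This two-step scheme can be sketched as a statement rewrite (the helper is hypothetical; the two statements it emits are standard pg/crdb syntax):

```python
# Hypothetical migration-tool rewrite: one in-txn ADD CONSTRAINT becomes an
# explicit NOT VALID addition plus a post-commit VALIDATE step.

def split_add_constraint(table, name, definition):
    in_txn = f"ALTER TABLE {table} ADD CONSTRAINT {name} {definition} NOT VALID"
    after_commit = f"ALTER TABLE {table} VALIDATE CONSTRAINT {name}"
    return in_txn, after_commit

step1, step2 = split_add_constraint("orders", "ck_qty", "CHECK (qty > 0)")
print(step2)  # → ALTER TABLE orders VALIDATE CONSTRAINT ck_qty
```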

@knz
Contributor Author

knz commented Oct 31, 2019 via email

@thoszhang
Contributor

thoszhang commented Oct 31, 2019

Are unique constraints/indexes included in the category of "constraints"? It's hard to see what the equivalent of an "unvalidated"/NOT VALID state is for a unique constraint, basically because a unique constraint is global (whereas check/FK constraints are not).

@thoszhang thoszhang reopened this Oct 31, 2019
@bdarnell
Contributor

Are unique constraints/indexes included in the category of "constraints"? It's hard to see what the equivalent of an "unvalidated"/NOT VALID state is for a unique constraint, basically because a unique constraint is global (whereas check/FK constraints are not).

In postgres, a CREATE UNIQUE INDEX CONCURRENTLY command can leave behind an INVALID index: https://www.postgresql.org/docs/9.1/sql-createindex.html. It is forbidden to use CREATE (UNIQUE) INDEX CONCURRENTLY in an explicit transaction.

@thoszhang
Contributor

thoszhang commented Oct 31, 2019

So an INVALID index in Postgres can either be (1) dropped and recreated, or (2) "rebuilt", during which the table is offline/locked. Since we can't support (2), what would be the advantage of us keeping an INVALID index in the logical schema at all, if all a user can do is drop it? (Also, unlike Postgres, since we presumably wouldn't support REBUILD, there would also be no point in continuing to keep the index updated on new writes. (edit: Actually, there would be a "point" in that we'd check for uniqueness violations on some subset of the rows. Is that useful?))

I guess the other question is whether we've considered supporting DDL statements (or mixed DDL/DML transactions) that do require the table to be offline, in order to solve some of these problems.

edit 2: I've been assuming throughout that unique constraints are always backed by unique indexes, and uniqueness violations manifest as key collisions in the index; if I'm wrong, please correct me.

@andy-kimball
Contributor

I think it's still somewhat useful to keep checking for uniqueness violations on some subset. At least things are not getting worse. That said, I'm not as opinionated on what to do with the index once the backfill job has failed. More important is being very deliberate and explicit about separating long-running O(# rows) operations into a sync and an async phase, and then ensuring that the sync phase is fully transactional and predictable (and documenting that).

Unique constraints don't always need to be enforced via index key collisions. See #41535 for an example where they're not.

@knz
Contributor Author

knz commented Oct 31, 2019

Are unique constraints/indexes included in the category of "constraints"? It's hard to see what the equivalent of an "unvalidated"/NOT VALID state is for a unique constraint, basically because a unique constraint is global (whereas check/FK constraints are not).

It seems to me that it's clear what is going on: if it's unvalidated, that means it's inactive and there can still be duplicate rows.

Also, UNIQUE can be handled via the constraint system (the SQL standard and pg's dialect give every unique constraint a name in the constraint table, so we can attach state to it).

@thoszhang
Contributor

Are unique constraints/indexes included in the category of "constraints"? It's hard to see what the equivalent of an "unvalidated"/NOT VALID state is for a unique constraint, basically because a unique constraint is global (whereas check/FK constraints are not).

It seems to me that it's clear what is going on: if it's unvalidated, that means it's inactive and there can still be duplicate rows.

But the guarantee provided by "unvalidated" for FK/check constraints is stronger: it means that the constraint is enforced for all writes going forward. This is not the case for unvalidated/invalid unique constraints with a partially built index, because we can't guarantee that a new row is actually globally unique in the table, even if we can guarantee that the row is unique across all rows updated since the constraint was added. This distinction seems important to me, but maybe it's unimportant for users in practice, as long as we flag the constraint as unvalidated/invalid.

That's a relatively minor point. I'm mostly wondering about how well this plan (having the transaction succeed if the synchronous portion of the schema change succeeds, and alerting failures during the async phase via another channel) will actually work for users' most common use cases, especially when they're using migration tools and ORMs. Will they need to manually monitor their migration to see if it succeeded?
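The weaker guarantee described at the top of this comment can be illustrated with a toy model (all names are illustrative): a partially built unique index can only compare new writes against rows indexed since the constraint was added, so a duplicate of a pre-existing row is silently accepted:

```python
# Toy model: an unvalidated unique constraint backed by a partial index.

def write_row(pre_existing, indexed_since_constraint, value):
    """Insert a value; return True if a global duplicate slipped through."""
    if value in indexed_since_constraint:
        raise ValueError("duplicate key")   # enforced among new writes only
    indexed_since_constraint.add(value)
    return value in pre_existing            # undetected global duplicate?

pre = {"a", "b"}                 # rows present before ADD CONSTRAINT UNIQUE
new = set()
print(write_row(pre, new, "a"))  # → True: accepted despite duplicating "a"
```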

This was referenced Jun 1, 2021
@jlinder jlinder added the T-sql-schema-deprecated Use T-sql-foundations instead label Jun 16, 2021
@exalate-issue-sync exalate-issue-sync bot added T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions) and removed T-sql-schema-deprecated Use T-sql-foundations instead labels May 10, 2023
craig bot pushed a commit that referenced this issue Aug 22, 2023
42063: sql: new opt-in option `strict_ddl_atomicity` r=fqazi,chrisseto,dikshant a=knz

Informs #42061.
Informs #87236.
Fixes #108737.

Previously, CockroachDB always accepted DDL inside explicit
txns (BEGIN...COMMIT) and would issue some of these DDL statements as
asynchronous jobs after the rest of the txn was committed
successfully. If that DDL subsequently failed, a client would also get
error `XXA00` to signal this error. However at that point the txn also
would be both partially committed and partially rolled back. This is
in effect an atomicity violation.

It was considered to be acceptable because in practice most clients
using DDL inside BEGIN...COMMIT do not, in fact, expect ACID semantics
in those cases. So it was considered OK to not deliver ACID semantics
in certain cases.

Except that we now have clients that do want us to "do the right
thing" and never allow a situation that _could_ yield atomicity
violations.

This patch makes a step in this direction by introducing a new session
variable `strict_ddl_atomicity`, which defaults to `false` for
compatibility with existing clients of CockroachDB 23.1 and of
previous CockroachDB versions.

When this setting is enabled (set to `true`), any DDL that would
otherwise be processed asynchronously and risk triggering error
`XXA00` is now fully disallowed, with the following error:

```
pq: unimplemented: cannot run this DDL statement inside BEGIN..COMMIT as its atomicity cannot be guaranteed
HINT: You have attempted to use a feature that is not yet implemented.
See: https://go.crdb.dev/issue-v/42061
--
You can set the session variable 'strict_ddl_atomicity' to false if you are willing to accept atomicity violations.
```

Other DDL statements without async processing are still allowed
without triggering that error.

Examples of rejected DDL include but are not limited to: CREATE
INDEX, ALTER ADD CONSTRAINT, ADD COLUMN with DEFAULT, DROP INDEX.

Examples of DDL that's still processed: RENAME COLUMN, RENAME TABLE,
ALTER INDEX CONFIGURE ZONE.

Release note (sql change): a SQL client can now request strict
atomicity for mixed DDL/DML transactions with the new session variable
`strict_ddl_atomicity`, which defaults to `false`. When this variable
is set to `true`, CockroachDB will refuse to process, inside
BEGIN...COMMIT, those specific DDL statements for which it cannot
guarantee atomic processing (other DDL statements are still allowed).

Note that schema changes implicit in certain operations (e.g. IMPORT)
are not protected via the new mechanism and can still fail with
`XXA00` errors.

Epic: CRDB-28893

Co-authored-by: Raphael 'kena' Poss <knz@thaumogen.net>