{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":79285486,"defaultBranch":"main","name":"OnlineSchemaChange","ownerLogin":"facebookincubator","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2017-01-18T00:06:19.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/19538647?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1632333846.175013","currentOid":""},"activityList":{"items":[{"before":"e4ab68f5b9648811d3696d0019fa1b211778d4d3","after":"ea181ca824b40b4dd3be02537917d53c395d6645","ref":"refs/heads/main","pushedAt":"2024-09-17T23:56:30.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Making OSC table create timestamp change retryable\n\nSummary:\nIn D56985779 we added a new failure model to OSC to fail a schema change if the table create timestamp is modified in the middle of the workflow. This is intended to detect a race condition in which OSC can potentially coincide with a user initiated change to the table schema. This caused failures in production for payments usecase which we could not triage.\n\nThis change does two things: (1) it improves the logging of the error condition so that we can additionally see what the observed timestamp is and (2) modifies the error classification from non-retryable to retryable. We observed that the timestamp change failure happens sporadically. Hence it makes sense to make it retryable in favor of increasing experienced success rate in the AOSC pipeline.\n\nDifferential Revision: D62893842\n\nfbshipit-source-id: c9a8d5c66b77fbed8c1b4ec1a7d9373f23a19a15","shortMessageHtmlLink":"Making OSC table create timestamp change retryable"}},{"before":"28394239ec86ef62af6dbda4e30f9d7e14aff723","after":"e4ab68f5b9648811d3696d0019fa1b211778d4d3","ref":"refs/heads/main","pushedAt":"2024-09-04T14:32:55.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"co-osc: native checksum\n\nSummary:\nThis diff integrates native checksum into co-osc using the server extensions added in D56383480.\n\nThe basic outline of how it works is similar to SQL-based checksum in co-osc.\n\nMost of the fun happens on the primary. We:\n1. Do a catchup and ensure all secondaries catch up using the same \"marker\" technique SQL-based checksum uses (drop a table on the primary and wait for the drop to replicate on all secondaries).\n1. Establish a fresh (TCP) connection to all replicas, including ourselves, and issue a `CHECKSUM TABLE` statement against the new table.\n1. While that is running, issue a `CHECKSUM TABLE` statement against the old table using the default (UNIX, local) connection.\n1. Wait for all results to come back.\n1. 
Push to main on 2024-09-03 by facebook-github-bot (head 2839423): "Skip creating triggers and delta table if catchup flag is on and call catchup tool during replay"

Summary:
- Skip creating triggers and the delta table if the flag is on
- Call the catchup tool during replay

Reviewed By: preritj24, alexbudfb
Differential Revision: D61956151

Push to main on 2024-08-29 by facebook-github-bot (head 4af97af): "Add missing Pyre mode headers [batch:4/25] [shard:2/N]"

Differential Revision: D61964303

Push to main on 2024-08-29 by facebook-github-bot (head b3ce2f5): "Adding method to enable fast_catchup on Co-OSC"

Summary: We are going to enable fast_catchup with GTID on Co-OSC first, before OSC, so it is turned off by default in OSC at the moment.

Differential Revision: D61926406

Push to main on 2024-08-29 by facebook-github-bot (head b1dd011): "Adding OSC Catchup job skeleton to call faster catchup"

Summary: This is the wrapper that bootstraps and keeps track of the status of the faster catchup tool. The class is used under a flag, so it won't affect prod.

Reviewed By: preritj24
Differential Revision: D61925348
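Taken together, the three catchup-related pushes above gate the trigger/delta-table machinery behind a flag and hand replay to the external catchup tool. A rough sketch of how such gating could look; the class, flag, and method names are hypothetical, not the real OSC/Co-OSC code paths.

```
# Illustrative only: names are invented, behavior follows the commit summaries.
class CopyStage:
    def __init__(self, use_fast_catchup: bool = False):
        # Off by default, mirroring the rollout order above (Co-OSC before OSC).
        self.use_fast_catchup = use_fast_catchup

    def prepare(self):
        if self.use_fast_catchup:
            # Fast catchup replays changes via GTID, so the trigger/delta
            # table machinery is skipped entirely.
            return
        self.create_delta_table()
        self.create_triggers()

    def replay(self):
        if self.use_fast_catchup:
            self.run_catchup_tool()  # wrapper that bootstraps/tracks the external tool
        else:
            self.replay_from_delta_table()

    # Stand-ins for the real implementations.
    def create_delta_table(self): ...
    def create_triggers(self): ...
    def run_catchup_tool(self): ...
    def replay_from_delta_table(self): ...
```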
Push to main on 2024-08-28 by facebook-github-bot (head 6ac4fc9): "Capturing GTID during snapshot"

Summary: Extending the OSC code to understand GTID during snapshot.

Reviewed By: preritj24
Differential Revision: D61921955

Push to main on 2024-08-27 by facebook-github-bot (head 3c0d3e5): "Add support for column expression and virtual_or_stored"

Summary:
Detect column expressions and virtual_or_stored from the raw schema. These will be used by the new linters to block customers from creating columns with expressions and virtual indexes.

Remove test_parser_on_real_config because there are too many differences between the old and new parser.

Reviewed By: bladepan
Differential Revision: D61511037

Push to main on 2024-08-21 by facebook-github-bot (head 5c32338): "Checking that table schema is unchanged before loading data"

Summary:
During S240364 it was observed that a DDL statement like `TRUNCATE TABLE`, which is not captured by OSC triggers, can be executed against the original table while the schema migration is in progress. This can lead to an inconsistency between the original and new tables, as the missing DDL statements are not replayed against the new table.

To protect against such unwanted outcomes, this change records the original table's create timestamp from `information_schema` before snapshot generation and compares it to one captured right before load. If the two do not match, the workflow is aborted with error code 155.

MySQL does not update the table create timestamp on truncation by default. To help detect truncation runs, we added a new global variable (`update_table_create_timestamp_on_truncate`) that instructs MySQL to update the table create timestamp every time the table is truncated.

To decouple the OSC rollout from the production rollout of the new global variable, OSC only uses the variable if it is defined.

Reviewed By: preritj24
Differential Revision: D56985779
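A minimal sketch of the create-timestamp guard described above. Error code 155 and the use of `information_schema` come from the commit message; the query shape, helper names, and use of a plain RuntimeError are assumptions standing in for OSC's real error handling.

```
# Sketch only: OSC's real error plumbing is not shown.
CREATE_TIME_QUERY = (
    "SELECT CREATE_TIME FROM information_schema.tables "
    "WHERE table_schema = %s AND table_name = %s"
)


def get_create_time(conn, db: str, table: str):
    with conn.cursor() as cur:
        cur.execute(CREATE_TIME_QUERY, (db, table))
        (create_time,) = cur.fetchone()
        return create_time


def check_table_unchanged(conn, db: str, table: str, recorded_create_time):
    """Compare the create time captured before snapshot generation with the
    one observed right before load; abort on mismatch (error code 155 in OSC)."""
    observed = get_create_time(conn, db, table)
    if observed != recorded_create_time:
        # Log both timestamps so the failure can be triaged (see the
        # retryable-classification follow-up at the top of this feed).
        raise RuntimeError(
            f"[155] table {db}.{table} create time changed: "
            f"recorded={recorded_create_time}, observed={observed}"
        )
```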
Push to main on 2024-08-19 by facebook-github-bot (head e37122c): "Enable rocksdb_bulk_load_enable_unique_key_check in OSC"

Summary:
When adding a unique key in OSC, bulk load used to be disabled because the unique check was missing (T177628948). That left AOSC T198047535 with no chance of succeeding, since it adds a unique key and therefore ran with bulk load disabled; it is a 200GB MyRocks table and was going at an excruciatingly slow pace.

Now that the server supports unique key checks during bulk loading (D53740420), we can enable bulk_load_unique_key in OSC.

Also refactor should_disable_bulk_load to make the logic clearer.

Reviewed By: preritj24
Differential Revision: D61310126
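The push above mentions refactoring `should_disable_bulk_load`; here is a hypothetical sketch of that kind of decision logic. Only the variable names rocksdb_bulk_load_enable_unique_key_check and bulk_load_unsorted (see the April 11 push further down) come from the commit messages; the parameters and structure are illustrative.

```
# Hypothetical bulk-load decision helper, not the actual OSC function.
def should_disable_bulk_load(
    adds_unique_key: bool,
    pk_collation_changed: bool,
    supports_unique_key_check: bool,
    supports_unsorted_bulk_load: bool,
) -> bool:
    if adds_unique_key and not supports_unique_key_check:
        # Without rocksdb_bulk_load_enable_unique_key_check the server cannot
        # enforce uniqueness during bulk load, so bulk load must be disabled.
        return True
    if pk_collation_changed and not supports_unsorted_bulk_load:
        # Without bulk_load_unsorted, rows arriving out of new-PK order would
        # break bulk load (the April 11 change below enables this path).
        return True
    return False
```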
Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"osc: support column reordering\n\nSummary:\nD60608640 adds support in the server to honor the column list order given to the `CHECKSUM TABLE` statement.\n\nThis change integrates it into OSC and teaches OSC to sort the `self.old_column_list` before sending it to the server to support column reorderings in the after table. E.g. if a table had columns (a, b, c) and an AOSC changed them to (a, c, b). Without this change and D60608640, `CHECKSUM TABLE` would process the columns in their defined order, resulting in different checksums. That would prevent us from using the faster native checksum for such AOSCs, and we'd have to special case this detection.\n\nNow we sort the columns lexicographically when generating the SQL statement, so we send (a, b, c) in both cases to the server, and MySQL (with D60608640), respects this order and processes the columns in the order given, not the table definition's.\n\nWith these two diffs, we can handle such cases without a fallback, allowing the optimization to have more coverage.\n\nDifferential Revision: D60616829\n\nfbshipit-source-id: 7e24a8ef269412a69db0f3dc4f12ae28cee3f71c","shortMessageHtmlLink":"osc: support column reordering"}},{"before":"3abdf820daa085388192ac5c832138e6db65569f","after":"0289c048fce481dc390e38ce95eb5d267a9858e2","ref":"refs/heads/main","pushedAt":"2024-08-06T20:35:58.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"osc: version refactor\n\nSummary:\nThis change refactors the MySQL version parsing code to support FB-infra MySQL version strings better. In particular, it queries `@version_comment` now instead of `@version`, which has richer information such as the release version.\n\nThis allows us to make finer grained decisions about whether OSC can use a certain capability or not, such as checksum extensions or `DUMP TABLE`.\n\nThere is already a precedent for this, with things like `is_high_pri_ddl_supported` and `is_trigger_rbr_safe`.\n\nReviewed By: preritj24\n\nDifferential Revision: D60781614\n\nfbshipit-source-id: 8dc062d18630494db4b59da27df56f5f8b69d9d5","shortMessageHtmlLink":"osc: version refactor"}},{"before":"d55f79ff7c74c67764630a578c49bd5669904cbc","after":"3abdf820daa085388192ac5c832138e6db65569f","ref":"refs/heads/main","pushedAt":"2024-07-31T20:41:33.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"osc: native checksum\n\nSummary:\nThis diff adds native checksum support to the full table checksum stage in OSC (AOSC integration will be separate).\n\nFull table checksum takes ~40% of the time in OSC.\n\nMore context:\n1. https://fb.workplace.com/groups/dbeng.oncall/permalink/26271987645756502/\n2. https://fb.workplace.com/groups/fb.database.management/permalink/1634178790698816/\n\nSince AOSC changes the schema, the plain `CHECKSUM TABLE` statement cannot be used as is since it applies to all columns in the table: we need to compare the columns apples-to-apples. 
Push to main on 2024-08-06 by facebook-github-bot (head 0289c04): "osc: version refactor"

Summary:
This change refactors the MySQL version parsing code to better support FB-infra MySQL version strings. In particular, it now queries `@@version_comment` instead of `@@version`, which has richer information such as the release version.

This allows us to make finer-grained decisions about whether OSC can use a certain capability, such as the checksum extensions or `DUMP TABLE`. There is already precedent for this, with things like `is_high_pri_ddl_supported` and `is_trigger_rbr_safe`.

Reviewed By: preritj24
Differential Revision: D60781614
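A minimal sketch of gating a capability on the server version string, as the refactor above enables. `@@version_comment` is a standard MySQL system variable; the parsing pattern, the capability name, and the version threshold are illustrative assumptions, not the real OSC checks.

```
# Illustrative capability gate based on @@version_comment.
import re


def get_version_comment(conn) -> str:
    with conn.cursor() as cur:
        cur.execute("SELECT @@version_comment")
        (comment,) = cur.fetchone()
        return comment


def parse_major_minor(version_comment: str) -> tuple[int, int]:
    m = re.search(r"(\d+)\.(\d+)", version_comment)
    if not m:
        return (0, 0)
    return int(m.group(1)), int(m.group(2))


def is_dump_table_supported(conn) -> bool:
    # Hypothetical threshold: assume DUMP TABLE only on 8.0+ builds.
    return parse_major_minor(get_version_comment(conn)) >= (8, 0)
```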
Push to main on 2024-07-31 by facebook-github-bot (head 3abdf82): "osc: native checksum"

Summary:
This diff adds native checksum support to the full table checksum stage in OSC (AOSC integration will be separate). Full table checksum takes ~40% of the time in OSC.

More context:
1. https://fb.workplace.com/groups/dbeng.oncall/permalink/26271987645756502/
2. https://fb.workplace.com/groups/fb.database.management/permalink/1634178790698816/

Since AOSC changes the schema, the plain `CHECKSUM TABLE` statement cannot be used as is, because it applies to all columns in the table: we need to compare the columns apples-to-apples. For that, D56383480 extended the command in the server to allow specific column selection, which also comes in handy for skipping non-deterministic columns like FLOAT (though for now AOSC does not perform such filtering, unlike OLM).

Reviewed By: ankurrastogi09
Differential Revision: D60481371

Push to main on 2024-07-09 by facebook-github-bot (head d55f79f): "osc: type hints and comments"

Summary:
As I study the checksum code in [co-]OSC, I'm annotating it with Python type hints for readability and navigability in VS Code. Also adding comments for clarity and some TODOs for questionable practices or improvements.

Reviewed By: preritj24
Differential Revision: D59478466

Push to main on 2024-06-27 by facebook-github-bot (head a9e92f0): "osc: Default Parallel Table Dump to 4 threads"

Summary:
Add a JK to control the number of threads OSC uses when `DUMP TABLE` is enabled. Also log the number of threads to Scuba.

Reviewed By: preritj24
Differential Revision: D59120065

Push to main on 2024-06-26 by facebook-github-bot (head 92d284f): "Sprinkle more gc collection into OSC"

Summary: We want OSC's memory usage to remain under 1GB (ideally close to 500MB) throughout the OSC process. There is really no reason for it to be so high when all the major work is being done in the server!

Differential Revision: D59062312

Push to main on 2024-06-18 by facebook-github-bot (head 909cd3d): "Fixes to exclude same row updates in one batch"

Summary: When building a batch, we need to peek at the upcoming rows to determine whether the batch has to be terminated because an update to an already-batched row is coming up.

Differential Revision: D58694950

Push to main on 2024-06-17 by facebook-github-bot (head 67196dc): "OSC: add extra tracing"

Summary: Adding a bunch of tracing to OSC to better diagnose performance issues.

Reviewed By: ankurrastogi09
Differential Revision: D58619755

Push to main on 2024-06-17 by facebook-github-bot (head 49d79bc): "Batch updates on unrelated consecutive primary keys in the delta table"

Summary:
OSC has always had issues catching up to high incoming write traffic, especially when there are a large number of updates. Some recent samples needing DMS oncall support and handholding: https://fburl.com/scuba/aosc_v2/38bq050l

On closer inspection, it seems we can do better. When consecutive updates are on unrelated primary keys, there is no reason to serialize them. It is also unusual to keep updating the same key at a rate of 2000 updates/second; the usual pattern is to update different keys.

Differential Revision: D58637829
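A sketch of the batching rule from the two replay changes above: consecutive delta-table updates can share a batch as long as no primary key repeats, and the batch is cut as soon as an upcoming row would touch a key already in it. The function and batch-size limit are illustrative, not the actual OSC replay code.

```
# Illustrative batching of delta-table updates on unrelated primary keys.
from typing import Iterable, Iterator, List, Tuple

Row = Tuple[int, dict]  # (primary_key, changed_columns), simplified


def batch_unrelated_updates(rows: Iterable[Row], max_batch: int = 1000) -> Iterator[List[Row]]:
    batch: List[Row] = []
    seen_keys: set = set()
    for pk, change in rows:
        if pk in seen_keys or len(batch) >= max_batch:
            # An upcoming update to an already-batched key (or a full batch)
            # terminates the current batch before continuing.
            yield batch
            batch, seen_keys = [], set()
        batch.append((pk, change))
        seen_keys.add(pk)
    if batch:
        yield batch


# Updates to keys 1, 2, 3 go out together; the second update to key 2 starts
# a new batch instead of every row being replayed one at a time.
for b in batch_unrelated_updates([(1, {}), (2, {}), (3, {}), (2, {}), (4, {})]):
    print([pk for pk, _ in b])   # [1, 2, 3] then [2, 4]
```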
Push to main on 2024-06-14 by facebook-github-bot (head 41d296c): "Ignore engine change in dry run output"

Summary:
This ask came from bizdb, who use dry run as a way to automatically detect any pending schema changes. Since they are going through a MyRocks migration, the engine difference causes false positives.

This is really a patch, since we want to do a more comprehensive change that doesn't care about engines. An alternative is to clear the engine field in each schema, but we will see how the user feedback is.

Differential Revision: D58540076

Push to main on 2024-06-11 by facebook-github-bot (head acd6a52): "Perform gc collection less aggressively"

Summary:
GC collect calls are expensive, and calling them after each replay/checksum chunk calculation can incur significant overhead in the OSC process. Also, the logs should only be littered with useful info.

We also realized that memory chunk gc collection was not enabled for some aspects of co-osc, causing occasional memory spikes. As we want to move towards having more OSCs per host, it is important to control the memory usage per run.

Reviewed By: alexbudfb
Differential Revision: D58339379
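The two gc-related pushes in this feed amount to collecting periodically rather than after every chunk. A trivial sketch; the interval of 50 chunks is an arbitrary example, not the value OSC uses.

```
# Throttled gc.collect() calls, illustrative only.
import gc


class ThrottledCollector:
    def __init__(self, every_n_chunks: int = 50):
        self.every_n_chunks = every_n_chunks
        self._chunks = 0

    def after_chunk(self) -> None:
        self._chunks += 1
        if self._chunks % self.every_n_chunks == 0:
            # Collecting after every replay/checksum chunk is expensive;
            # collecting every N chunks keeps memory bounded at lower cost.
            gc.collect()
```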
Push to main on 2024-05-31 by facebook-github-bot (head 6cfef3e): "aosc: integrate DUMP TABLE"

Summary:
Integrate the `DUMP TABLE` statement into AOSC.

For now, this is only triggered by an explicit flag in the `aosc` CLI, which gets propagated through the `AllowRequest` to the AOSC Server, which in turn spawns a CWS workflow for Schema Change, passing the flag through to `dbexecd_osc_wrapper` via a command line arg. The flag then gets picked up by the OSC `Copy`/`CopyV2` command/payload. In the future there will be a JK to control rollout.

Things remaining:
1. Warm storage support
2. Drop column support (depends on D55775540)
3. Lots of E2E testing

Reviewed By: preritj24
Differential Revision: D56306561

Push to main on 2024-05-24 by facebook-github-bot (head 6bae79a): "Add some dumps on mismatched co-osc checksums"

Summary:
We sprinkle some replication lag checks in co-osc to verify that instances are not totally backed up. This works fine for most use cases, but occasionally there are heavy-write customers for which replication lag can be a few seconds. Here, we will get false positives in checksum calculations if we only check for 240-second lag and ignore intermittent lag of a few seconds.

To detect such cases in general, as well as any anomalies around replication, add a way to dump the checksums.

Another fix clubbed in is to clear the state of gap ids (last_replayed_id) from the previous loop in a batch OSC, because stale state can cause us to skip tracking some gap ids in the current run, causing false positives.

Reviewed By: sveiss
Differential Revision: D57677277

Push to main on 2024-05-20 by facebook-github-bot (head f9b320c): "Categorize OSC errors with internal and production issues"

Summary:
We had some bulk loading issues on cold_storage OSC and failed to detect them before users reported them.

OSC can be affected by MySQL internal bugs or serious problems on the MySQL side (osc payload, WS, server core, replication, etc.), and we should audit them and trigger alerts.

Reviewed By: preritj24
Differential Revision: D57239397

Push to main on 2024-05-08 by facebook-github-bot (head 67f41c8): "support vector dimension column attribute"

Summary: Add support for the vector dimension attribute on vector columns.

Reviewed By: preritj24
Differential Revision: D57070213

Push to main on 2024-05-08 by facebook-github-bot (head 99636ee): "Terminate some AOSCs quicker"

Summary:
When a user AOSC hits a 'lack of default value' or 'duplicate keys' error due to a bad schema change request, we should not retry it under the guise of generic_mysql_error. These are genuine errors that will not be fixed on their own and require user action. There may be similar errors we can add here. Ideally, filtering based on error codes is better (which we are working on as part of the UI work).

Errors for reference:

Error during stage "running DDL on db 'unified_activity_codes'": [1062] Duplicate entry '61557311291800-1713296581' for key '__osc_new_uac_extension_log.unique_log'.

Error during stage "running DDL on db 'unified_activity_codes'": [1364] Field 'io_id' doesn't have a default value.

1265: DATA TRUNCATED FOR COLUMN {COLUMN NAME}

3780: REFERENCING COLUMN A AND REFERENCED COLUMN B IN FOREIGN KEY

Reviewed By: sveiss
Differential Revision: D57107532
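A sketch of the error-code based termination the push above is working toward. The error codes are the ones quoted in the message; the classification helper itself is hypothetical.

```
# Hypothetical classification of MySQL errors that should terminate an AOSC
# immediately; the error codes are the ones quoted in the commit message.
USER_ACTIONABLE_ERRNOS = {
    1062,  # duplicate entry for a unique key
    1364,  # field doesn't have a default value
    1265,  # data truncated for column
    3780,  # incompatible referencing/referenced columns in a foreign key
}


def should_terminate_without_retry(mysql_errno: int) -> bool:
    """Errors caused by the schema change request itself will not go away on
    retry; surface them to the user instead of retrying."""
    return mysql_errno in USER_ACTIONABLE_ERRNOS
```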
Push to main on 2024-04-16 by facebook-github-bot (head 0e1448c): "Allow ast parser to all OSC modes"

Summary: We are re-enabling direct mode, so enable the AST parser for all OSC modes as well.

Reviewed By: preritj24
Differential Revision: D56197017

Push to main on 2024-04-11 by facebook-github-bot (head 1692111): "Support bulk load for PK collation changes"

Summary:
In MyRocks there is an option, bulk_load_unsorted, that supports bulk loading even when the primary key charset/collation changes. It was previously disabled because the primary key order changes and caused checksum mismatches.

With this enabled, it should ideally speed up AOSCs like T184614241.

Differential Revision: D55954516

Push to main on 2024-04-05 by facebook-github-bot (head 296367e): "buckification"

Summary:
This commit was generated using `mgt import`. Buckification for third-party libraries: third-party/pypi/pyparsing/3.1.2

Reviewed By: itamaro
Differential Revision: D54806457

(This page shows the 30 most recent pushes to main; older activity continues on the next page.)