-
-- Using [`SESSION_USER`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#special-syntax-forms) in a projection or `WHERE` clause now returns the `SESSION_USER` instead of the `CURRENT_USER`. For backward compatibility, use [`session_user()`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#system-info-functions) for `SESSION_USER` and `current_user()` for `CURRENT_USER`. [#70444][#70444]
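-
- For example (illustrative only), the special syntax form and the equivalent built-in functions can be compared directly:
-
- ~~~ sql
- SELECT SESSION_USER;                    -- now returns the session user, not the current user
- SELECT session_user(), current_user();  -- built-in function equivalents
- ~~~
-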
-- Placeholder values (e.g., `$1`) can no longer be used for role names in [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements or for role names in [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role)/[`DROP ROLE`](https://www.cockroachlabs.com/docs/v22.1/drop-role) statements. [#71498][#71498]
-
-
Security updates
-
-- Authenticated HTTP requests to nodes can now contain additional cookies with the same name as the one CockroachDB uses ("session"). The HTTP spec permits duplicate cookie names, and CockroachDB now attempts to parse all cookies with a matching name before giving up. This can resolve issues with running other services on the same domain as your CockroachDB nodes. [#70792][#70792]
-- Added a new flag `--external-io-enable-non-admin-implicit-access` that can remove the `admin`-only restriction on interacting with arbitrary network endpoints and using `implicit` auth in operations such as [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), or [`EXPORT`](https://www.cockroachlabs.com/docs/v22.1/export). [#71594][#71594]
-- When configuring passwords for SQL users, if the client presents the password in cleartext via `ALTER`/`CREATE USER`/`ROLE WITH PASSWORD`, CockroachDB is responsible for hashing this password before storing it. By default, this hashing uses CockroachDB's bespoke `crdb-bcrypt` algorithm, which is based on the standard [bcrypt algorithm](https://wikipedia.org/wiki/Bcrypt). The cost of this hashing function is now configurable via the new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `server.user_login.password_hashes.default_cost.crdb_bcrypt`. Its default value is `10`, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to brute-force passwords via repeated login attempts. Future versions of CockroachDB will likely update the default accordingly. [#74582][#74582]
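-
- A minimal sketch of adjusting the cost (the value `11` is illustrative; each increase of one roughly doubles the hashing work):
-
- ~~~ sql
- SET CLUSTER SETTING server.user_login.password_hashes.default_cost.crdb_bcrypt = 11;
- ~~~
-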
-
-
General changes
-
-- Non-cancelable [jobs](https://www.cockroachlabs.com/docs/v22.1/show-jobs) no longer fail unless they encounter a permanent error; if they fail due to a transient error, they are retried with exponential backoff. Similarly, jobs that are reverting no longer fail on transient errors encountered while reverting; they are also retried with exponential backoff. As a result, transient errors do not impact jobs that are reverting. [#69300][#69300]
-- CockroachDB now supports exporting operation [traces](https://www.cockroachlabs.com/docs/v22.1/show-trace) to [OpenTelemetry](https://opentelemetry.io/)-compatible tools using the OTLP protocol through the `trace.opentelemetry.collector` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). [#65599][#65599]
-- CockroachDB now supports exporting traces to a [Jaeger](https://www.jaegertracing.io/) agent through the new `trace.jaeger.agent` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). Exporting to Jaeger was previously possible by configuring the Jaeger agent to accept Zipkin traces and using the `trace.zipkin.collector` cluster setting; this configuration is no longer required. [#65599][#65599]
-- Support for exporting to Datadog and Lightstep through other interfaces has been retired; these tools can use OpenTelemetry data. The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `trace.lightstep.token`, `trace.datadog.agent`, and `trace.datadog.project` have been deprecated; they no longer have any effect. [#65599][#65599]
-- Tracing transaction commits now includes details about replication. [#72738][#72738]
-
-
Enterprise edition changes
-
-- Updated retryable error warning message to begin with `"WARNING"`. [#70226][#70226]
-- Temporary tables are now [restored](https://www.cockroachlabs.com/docs/v22.1/restore) to their original database instead of to `defaultdb` during a [full cluster restore](https://www.cockroachlabs.com/docs/v22.1/restore#full-cluster). Furthermore, `defaultdb` and `postgres` are dropped before a full cluster restore and will only be restored if they are present in the [backup](https://www.cockroachlabs.com/docs/v22.1/backup) being restored. [#71890][#71890]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) now support [GCP Pub/Sub](https://cloud.google.com/pubsub) as a sink. [#72056][#72056]
-
-
SQL language changes
-
-- Added new job control statements that allow an operator to manipulate all jobs of a specific type at once (for example, `PAUSE ALL CHANGEFEED JOBS`, as shown below). This is supported for [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed), [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), and [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) jobs. [#69314][#69314]
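-
- The `RESUME` and `CANCEL` forms below are assumed to follow the same pattern as the `PAUSE` form given in the release note:
-
- ~~~ sql
- PAUSE ALL CHANGEFEED JOBS;
- RESUME ALL BACKUP JOBS;
- CANCEL ALL IMPORT JOBS;
- ~~~
-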
-- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) now contains more information about the [MVCC](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#mvcc) behavior of operators that scan data from disk. [#64503][#64503]
-- Added support for SQL arrays containing JSON for in-memory processing. This does not add support for storing SQL arrays of JSON in tables. [#70041][#70041]
-- Placeholder values can now be used as the right-hand operand of the `JSONFetchVal (->)` and `JSONFetchText (->>)` [operators](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#supported-operations) without ambiguity. This argument will be given the text type and the "object field lookup" variant of the operator will be used. [#70066][#70066]
-- Fixed `createdb` and `settings` columns for [`pg_catalog` tables](https://www.cockroachlabs.com/docs/v22.1/pg-catalog#data-exposed-by-pg_catalog): `pg_user`, `pg_roles`, and `pg_authid`. [#69609][#69609]
-- The `information_schema._pg_truetypid`, `information_schema._pg_truetypmod`, and `information_schema._pg_char_max_length` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) are now supported to improve compatibility with PostgreSQL. [#69913][#69913]
-- The `pg_my_temp_schema` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) now properly returns the OID of the active session's temporary schema, if one exists. [#69909][#69909]
-- The `pg_is_other_temp_schema` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) is now supported, which returns whether the given OID is the OID of another session's temporary schema. [#69909][#69909]
-- The `information_schema._pg_index_position` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) is now supported, which improves compatibility with PostgreSQL. [#69911][#69911]
-- Extended index scan hints to allow [zigzag joins](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#zigzag-joins) to be forced. [#67737][#67737]
-- `pg_authid.rolesuper`, `pg_roles.rolesuper`, and `pg_user.usesuper` are now true for users/roles that have the `admin` role. [#69981][#69981]
-- Added a warning that [sequences](https://www.cockroachlabs.com/docs/v22.1/create-sequence) are slower than using [`UUID`](https://www.cockroachlabs.com/docs/v22.1/uuid). [#68964][#68964]
-- SQL queries with [`ORDER BY x LIMIT k`](https://www.cockroachlabs.com/docs/v22.1/order-by) clauses may now be transformed to use TopK sort in the query plan if the limit is a constant. Although this affects the output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain), using TopK in the query plan does not necessarily mean that it is used during execution. [#68140][#68140]
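-
- A sketch using a hypothetical table; with a constant `LIMIT`, `EXPLAIN` may show a top-k sort instead of a full sort followed by a limit:
-
- ~~~ sql
- CREATE TABLE events (id INT PRIMARY KEY, ts TIMESTAMPTZ);
- EXPLAIN SELECT * FROM events ORDER BY ts LIMIT 10;
- ~~~
-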
-- The `has_tablespace_privilege`, `has_server_privilege`, and `has_foreign_data_wrapper_privilege` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) now return [`NULL`](https://www.cockroachlabs.com/docs/v22.1/null-handling) instead of `true` when provided with a non-existent OID reference. This matches the behavior of newer PostgreSQL versions. [#69939][#69939]
-- The `pg_has_role` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) is now supported, which returns whether a given user has privileges for a specified role or not. [#69939][#69939]
-- Added the `json_populate_record`, `jsonb_populate_record`, `json_populate_recordset`, and `jsonb_populate_recordset` functions, which transform JSON into row tuples based on the labels in a record type. [#70115][#70115]
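-
- A minimal sketch, assuming the hypothetical table `kv` and that its implicitly defined record type (see the note on implicit table types later in this section) supplies the labels:
-
- ~~~ sql
- CREATE TABLE kv (a INT, b STRING);
- SELECT * FROM json_populate_record(NULL::kv, '{"a": 1, "b": "hello"}');
- ~~~
-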
-- The `enable_drop_enum_value` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables) has been removed, along with the corresponding [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). The functionality of being able to drop `enum` values is now enabled automatically. Queries that refer to the session/cluster setting will still work but will have no effect. [#70369][#70369]
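-
- For example (hypothetical `ENUM` type and value), dropping a value now works without enabling any setting:
-
- ~~~ sql
- CREATE TYPE status AS ENUM ('open', 'closed', 'deprecated');
- ALTER TYPE status DROP VALUE 'deprecated';
- ~~~
-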
-- The array [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) (`array_agg`, `array_cat`, `array_position`, etc.) now operate on record types. [#70332][#70332]
-- When an invalid cast to OID is made, a `pgerror` with code `22P02` is now returned. This previously threw an assertion error. [#70454][#70454]
-- Added the `new_db_name` option to the [`RESTORE DATABASE`](https://www.cockroachlabs.com/docs/v22.1/restore#databases) statement, allowing the user to rename the database they intend to restore. [#70222][#70222]
-- Fixed error messaging for [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) for sequences. Example: `SELECT nextval('@#%@!324234')` correctly returns `relation "@#%@!324234" does not exist` (if the relation doesn't exist) instead of a syntax error. `SELECT currval('')` returns `currval(): invalid table name:`. [#70590][#70590]
-- It is now possible to cast [JSON](https://www.cockroachlabs.com/docs/v22.1/jsonb) booleans to the `BOOL` type, and to cast JSON numerics with fractions to rounded `INT` types. Error messages are now more clear when a cast from a JSON value to another type fails. [#70522][#70522]
-- Added a new SQL [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) `unordered_unique_rowid`, which generates a globally unique 64-bit integer that does not have ordering. [#70338][#70338]
-- Added a new [`serial_normalization` case](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables) `unordered_rowid`, which generates a globally unique 64-bit integer that does not have ordering. [#70338][#70338]
-- A hint is now provided when using a [`SERIAL4` type](https://www.cockroachlabs.com/docs/v22.1/serial) that gets upgraded to a `SERIAL8` due to the `serial_normalization` session variable requiring an `INT8` to succeed. [#70656][#70656]
-- Improved the error message to identify the column and data type when users try to select a named field from an anonymous record that has no labels. [#70726][#70726]
-- Implemented `pg_statistic_ext` in [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog#data-exposed-by-pg_catalog). [#70591][#70591]
-- Implemented `pg_shadow` in [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog#data-exposed-by-pg_catalog). [#68255][#68255]
-- Disallowed cross-database references for sequences by default. This can be enabled with the [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.cross_db_sequence_references.enabled`. [#70581][#70581]
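-
- To re-enable cross-database sequence references:
-
- ~~~ sql
- SET CLUSTER SETTING sql.cross_db_sequence_references.enabled = true;
- ~~~
-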
-- Added the ability to comment on SQL table [constraints](https://www.cockroachlabs.com/docs/v22.1/constraints) using PostgreSQL's `COMMENT ON CONSTRAINT` syntax. [#69783][#69783]
-- Added a `WITH COMMENT` clause to the [`SHOW CONSTRAINT`](https://www.cockroachlabs.com/docs/v22.1/show-constraints) statement that causes constraint comments to be displayed. [#69783][#69783]
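-
- A short sketch with hypothetical table and constraint names:
-
- ~~~ sql
- COMMENT ON CONSTRAINT orders_pkey ON orders IS 'primary key for the orders table';
- SHOW CONSTRAINTS FROM orders WITH COMMENT;
- ~~~
-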
-- Added empty stubs for tables and columns. Tables: `pg_statistic`, `pg_statistic_ext_data`, `pg_stats`, `pg_stats_ext`. Columns: `pg_attribute.attmissingval`. [#70865][#70865]
-- Previously, the behavior of [casting](https://www.cockroachlabs.com/docs/v22.1/data-types#data-type-conversions-and-casts) an [`INT`](https://www.cockroachlabs.com/docs/v22.1/int) to `CHAR` was similar to `BPCHAR`, where only the first digit of the integer was returned. Now, an `INT` cast to `CHAR` is interpreted as an ASCII character code, which aligns the behavior more closely with PostgreSQL. [#70942][#70942]
-- A value of type `CHAR` can now be used as a parameter in a prepared statement. [#70942][#70942]
-- The `information_schema._pg_numeric_precision`, `information_schema._pg_numeric_precision_radix`, and `information_schema._pg_numeric_scale` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) are now supported, which improves compatibility with PostgreSQL. [#70881][#70881]
-- If the time zone is set to a GMT offset, for example `+7` or `-11`, it is now formatted as `<+07>-07` or `<-11>+11` respectively, instead of `+7` or `-11`. This most notably shows up in the output of [`SHOW TIME ZONE`](https://www.cockroachlabs.com/docs/v22.1/show-vars#supported-variables). [#70716][#70716]
-- `NULLS FIRST` and `NULLS LAST` specifiers are now supported for [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by). [#71083][#71083]
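-
- For example (hypothetical table and column):
-
- ~~~ sql
- SELECT * FROM t ORDER BY x DESC NULLS LAST;
- ~~~
-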
-- Added `SHOW CREATE ALL SCHEMAS` to allow the user to retrieve [`CREATE` statements](https://www.cockroachlabs.com/docs/v22.1/create-schema) to recreate the schemas of the current database. A flat log of the `CREATE` statements for schemas is returned. [#71138][#71138]
-- The session variable `inject_retry_errors_enabled` has been added. When it is true, any statement that is not a [`SET`](https://www.cockroachlabs.com/docs/v22.1/set-vars) statement will return a [transaction retry error](https://www.cockroachlabs.com/docs/v22.1/transaction-retry-error-reference) if it is run inside an explicit transaction. If the client retries the transaction using the special `cockroach_restart` [`SAVEPOINT`](https://www.cockroachlabs.com/docs/v22.1/savepoint), then after the third error the transaction proceeds as normal. Otherwise, the errors continue until `inject_retry_errors_enabled` is set to false. The purpose of this setting is to allow users to test their transaction retry logic, as sketched below. [#71357][#71357]
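-
- A minimal sketch of exercising client-side retry logic with this variable:
-
- ~~~ sql
- SET inject_retry_errors_enabled = true;
- BEGIN;
- SELECT 1;  -- returns a transaction retry error while the variable is on
- ROLLBACK;
- SET inject_retry_errors_enabled = false;
- ~~~
-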
-- Arrays of [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) data types can now be compared. [#71427][#71427]
-- `NULLS` can be ordered [`NULLS LAST`](https://www.cockroachlabs.com/docs/v22.1/order-by#parameters) by default if the `null_ordered_last` [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars#supported-variables) is set to true. [#71429][#71429]
-- Previously, comparing against [`bytea[]`](https://www.cockroachlabs.com/docs/v22.1/bytes) without a cast (e.g., `SELECT * FROM t WHERE byteaarrcol = '{}'`) would result in an ambiguous error. This has now been resolved. [#71501][#71501]
-- Previously, placeholders in an [`ARRAY`](https://www.cockroachlabs.com/docs/v22.1/array) (e.g., `SELECT ARRAY[$1]::int[]`) would result in an ambiguous error. This has now been fixed. [#71432][#71432]
-- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) output now displays the limit hint when it is nonzero as part of the `estimated row count` field. [#71299][#71299]
-- Implicit casts performed during [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert) statements now more closely follow PostgreSQL's behavior. Several minor bugs related to these types of casts have been fixed. [#70722][#70722]
-- Newly created tables now have their primary key index/constraint named `<table_name>_pkey` by default. [#70604][#70604]
-- A newly created [`FOREIGN KEY`](https://www.cockroachlabs.com/docs/v22.1/foreign-key) constraint now has the same naming scheme as PostgreSQL: `<table>_<column>_fkey`. Previously, the name followed the pattern `fk_<column>_ref_<referenced_table>`. [#70658][#70658]
-- `CURRENT_USER` and `SESSION_USER` can now be used as the role identifier in [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements. [#71498][#71498]
-- Array [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) can now be used with arrays of [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum). [#71482][#71482]
-- Introduced an implicitly defined type for every table, which resolves to a `TUPLE` type that contains all of the columns in the table. [#70100][#70100]
-- The [`WITH RECURSIVE`](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions#recursive-common-table-expressions) variant that uses `UNION` (as opposed to `UNION ALL`) is now supported. [#71685][#71685]
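-
- For example, using `UNION` to de-duplicate rows across iterations:
-
- ~~~ sql
- WITH RECURSIVE nums (n) AS (
-     SELECT 1
-     UNION
-     SELECT n + 1 FROM nums WHERE n < 5
- )
- SELECT n FROM nums;
- ~~~
-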
-- Infinite decimal values can now be encoded when sending data to/from the client. The encoding matches the PostgreSQL encoding. [#71772][#71772]
-- Previously, certain [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) built-in functions or operators required an explicit `ENUM` cast. This has been reduced in some cases. [#71653][#71653]
-- Removed the [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.defaults.interleaved_tables.enabled` as interleaved support is now fully removed. [#71537][#71537]
-- `T_unknown` ParameterTypeOIDs in the PostgreSQL frontend/backend protocol are now correctly handled. [#71971][#71971]
-- [String literals](https://www.cockroachlabs.com/docs/v22.1/sql-constants#string-literals) can now be parsed as tuples, either in a cast expression, or in other contexts like function arguments. [#71916][#71916]
-- Added the [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `crdb_internal.reset_index_usage_stats()` to clear index usage stats. This can be invoked from the SQL shell. [#71896][#71896]
-- Custom session options can now be used, i.e., any [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars) that has `.` in the name. [#71915][#71915]
-- Added logic to process an `EXPORT PARQUET` statement. [#71868][#71868]
-- Added ability to `EXPORT PARQUET` for relations with `FLOAT`, `INT`, and `STRING` column types. [#71868][#71868]
-- This change removes support for `IMPORT TABLE ... CREATE USING` and `IMPORT TABLE ... <format> DATA`, where `<format>` refers to CSV, Delimited, PGCOPY, or Avro. These formats do not define the table schema in the same file as the data. The workaround following this feature removal is to use [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/create-table) with the same schema that was previously passed into the [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) statement, followed by an [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v22.1/import-into) the newly created table. [#71058][#71058]
-- Previously, running [`COMMENT ON CONSTRAINT`](https://www.cockroachlabs.com/docs/v22.1/comment-on) on a table in a schema would succeed but the comment would not actually be created. Now the comment is successfully created. [#71985][#71985]
-- `INTERLEAVE IN PARENT` is permanently removed from CockroachDB. [#70618][#70618]
-- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) now shows maximum allocated memory and maximum SQL temp disk usage for a statement. [#72113][#72113]
-- Added `SHOW CREATE ALL TYPES` to allow the user to retrieve the statements to recreate user-defined types of the current database. It returns a flat log of the `CREATE` statements for types. [#71326][#71326]
-- It is now possible to swap names (for tables, etc.) in the same transaction. For example:
-
- ~~~ sql
- CREATE TABLE foo();
- BEGIN;
- ALTER TABLE foo RENAME TO bar;
- CREATE TABLE foo();
- COMMIT;
- ~~~
- Previously, the user would receive a "relation ... already exists" error. [#70334][#70334]
-
-- To align with PostgreSQL, casting an OID type with a value of `0` to a `regtype`, `regproc`, `regclass`, or `regnamespace` will now convert the value to the string `-`. The reverse behavior is implemented too, so a `-` will become `0` if cast to a `reg` OID type. [#71873][#71873]
-- Implemented the `date_part` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) for better compatibility with PostgreSQL. [#72502][#72502]
-- `PRIMARY KEY`s have been renamed to conform to PostgreSQL (e.g., `@tbl_col1_col2_pkey`) in this release. To protect certain use cases of backward compatibility, we also allow `@primary` index hints to alias to the `PRIMARY KEY`, but only if no other index is named `primary`. [#72534][#72534]
-- Some filesystem-level properties are now exposed in `crdb_internal.kv_store_status`. Note that the particular fields and layout are not stabilized yet. [#72435][#72435]
-- Introduced a [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `crdb_internal.init_stream` and a [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `stream_replication.job_liveness_timeout`. [#72330][#72330]
-- A notice is now issued when creating a [foreign key](https://www.cockroachlabs.com/docs/v22.1/foreign-key) referencing a column of a different width. [#72545][#72545]
-- Newly created databases will now have the `CONNECT` privilege granted by default to the `PUBLIC` role. [#72595][#72595]
-- [SQL Stats metrics](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-statistics) with `*_internal` suffix in their labels are now removed. [#72667][#72667]
-- `system.table_statistics` has an additional field, `avgSize`, that is the average size in bytes of the column(s) with `columnIDs`. The new field is visible with the command `SHOW STATISTICS FOR TABLE`, as with other table statistics. This field is not yet used by the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) as part of cost modeling. [#72365][#72365]
-- Added the modifier `IF NOT EXISTS` to `ALTER TABLE ... ADD CONSTRAINT IF NOT EXISTS`. [#71257][#71257]
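-
- For example (hypothetical table, column, and constraint names):
-
- ~~~ sql
- ALTER TABLE accounts ADD CONSTRAINT IF NOT EXISTS balance_non_negative CHECK (balance >= 0);
- ~~~
-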
-- Fixed [`gateway_region` built-in](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) for `--multitenant` demo clusters. [#72734][#72734]
-- Prior to this change it was possible to alter a column's type in a way that was not compatible with the [`DEFAULT`](https://www.cockroachlabs.com/docs/v22.1/default-value) or [`ON UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update) clause. This would cause parsing errors within tables. Now the `DEFAULT` or `ON UPDATE` clause is checked. [#71423][#71423]
-- Added the [`CREATE SEQUENCE ... AS <typename>`](https://www.cockroachlabs.com/docs/v22.1/create-sequence) option. [#57339][#57339]
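-
- A sketch, assuming the option takes an integer type name as in PostgreSQL:
-
- ~~~ sql
- CREATE SEQUENCE ticket_seq AS INT4;
- ~~~
-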
-- Introduced new SQL syntax [`ALTER RANGE RELOCATE`](https://www.cockroachlabs.com/docs/v22.1/configure-zone) to move a lease or replica between stores. This is helpful in an emergency situation to relocate data in the cluster. [#72305][#72305]
-- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) can now export relations with `NULL` values to Parquet files. [#72530][#72530]
-- Previously, [`ALTER TABLE ... RENAME TO ...`](https://www.cockroachlabs.com/docs/v22.1/alter-table#subcommands) allowed the user to move a table from one database to another when renaming it from one database's public schema to another database's public schema. This is now disallowed. [#72000][#72000]
-- [`ALTER DATABASE CONVERT TO SCHEMA`](https://www.cockroachlabs.com/docs/v22.1/alter-database#subcommands) is now disabled in v22.1 and later. [#72000][#72000]
-- It is now possible to specify a different path for [incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups). [#72713][#72713]
-- If the `WITH GRANT OPTION` flag is present when granting privileges to a user, then that user is able to grant those same privileges to subsequent users; otherwise, they cannot. If the `GRANT OPTION FOR` flag is present when revoking privileges from a user, then only the ability to grant those privileges is revoked from that user, not the privileges themselves (otherwise both the privileges and the ability to grant those privileges are revoked). This behavior is consistent with PostgreSQL. [#72123][#72123]
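-
- For example (hypothetical table and role names):
-
- ~~~ sql
- GRANT SELECT ON TABLE accounts TO alice WITH GRANT OPTION;
- -- Revokes only alice's ability to grant SELECT to others; she keeps SELECT itself.
- REVOKE GRANT OPTION FOR SELECT ON TABLE accounts FROM alice;
- ~~~
-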
-- Disallowed `ST_MakePolygon` making empty polygons from empty [`LINESTRING`](https://www.cockroachlabs.com/docs/v22.1/linestring). This is not allowed in PostGIS. [#73489][#73489]
-- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) now preserves column names and nullability. [#73382][#73382]
-- Previously, the output from [`SHOW CREATE VIEW`](https://www.cockroachlabs.com/docs/v22.1/show-create#show-the-create-view-statement-for-a-view) returned on a single line. The format has now been improved to be more readable. [#73642][#73642]
-- The output of the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) SQL statement has changed. Below the plan, index recommendations are now outputted for the SQL statement in question, if there are any. These index recommendations are indexes the user could add or indexes they could replace to make the given query faster. [#73302][#73302]
-- The [`VOID` type](https://www.cockroachlabs.com/docs/v22.1/data-types) is now recognized. [#73488][#73488]
-- In the experimental [`RELOCATE`](https://www.cockroachlabs.com/docs/v22.1/cockroachdb-feature-availability) syntax forms, the positional keyword that indicates that the statement should move non-voter replicas is now spelled `NONVOTERS`, instead of `NON_VOTERS`. [#73803][#73803]
-- The inline help for the `ALTER` statements now mentions the `RELOCATE` syntax. [#73803][#73803]
-- The experimental `ALTER RANGE...RELOCATE` syntax now accepts arbitrary [scalar expressions](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions) as the source and target store IDs. [#73807][#73807]
-- The output of `EXPLAIN ALTER RANGE ... RELOCATE` now includes the source and target store IDs. [#73807][#73807]
-- The experimental `ALTER RANGE...RELOCATE` syntax now accepts arbitrary [scalar expressions](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions) as the range ID when the `FOR` clause is not used. [#73807][#73807]
-- The output of `EXPLAIN ALTER RANGE ... RELOCATE` now includes which replicas are subject to the relocation. [#73807][#73807]
-- [`ALTER DEFAULT PRIVILEGES IN SCHEMA <schema>`](https://www.cockroachlabs.com/docs/v22.1/alter-default-privileges) is now supported. In addition to specifying default privileges globally (within a database), users can now specify default privileges in a specific schema. When creating an object that has default privileges specified at both the database (global) and the schema level, the union of the default privileges is taken. [#73576][#73576]
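-
- For example (hypothetical schema and role names):
-
- ~~~ sql
- ALTER DEFAULT PRIVILEGES IN SCHEMA app GRANT SELECT ON TABLES TO analyst;
- ~~~
-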
-- Index recommendations can be omitted from the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) plan if the `index_recommendations_enabled` session variable is set to false. [#73346][#73346]
-- The output of `EXPLAIN ALTER INDEX/TABLE ... RELOCATE/SPLIT` now includes the target table/index name and, for the [`SPLIT AT`](https://www.cockroachlabs.com/docs/v22.1/split-at) variants, the expiry timestamp. [#73832][#73832]
-- Added the `digest` and `hmac` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators). They match the PostgreSQL (pgcrypto) implementation. Supported hash algorithms are `md5`, `sha1`, `sha224`, `sha256`, `sha384`, and `sha512`. [#73935][#73935]
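-
- For example, hex-encoding the resulting bytes for readability:
-
- ~~~ sql
- SELECT encode(digest('hello', 'sha256'), 'hex');
- SELECT encode(hmac('hello', 'secret-key', 'sha256'), 'hex');
- ~~~
-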
-- Users can now [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) (locality-aware) [incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) created with the `incremental_storage` parameter. [#73744][#73744]
-- Improved [cost model](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) for TopK expressions if the input to TopK can be partially ordered by its sort columns. [#73459][#73459]
-- Added the `incremental_storage` option to [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) so users can now observe [incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups). [#73357][#73357]
-- Previously, escape character processing (`\`) was missing from constraint span generation, which resulted in incorrect results when doing escaped lookups. This is now fixed. [#73978][#73978]
-- The shard column of a [hash-sharded index](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) is now a [virtual column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) and not a stored computed column. [#74138][#74138]
-- Clients waiting for a [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) job will now receive an error if the job they are waiting for is paused. [#74157][#74157]
-- The [`GRANT`](https://www.cockroachlabs.com/docs/v22.1/grant#supported-privileges) privilege is deprecated in v22.1 and will be removed in v22.2 in favor of grant options. To promote backward compatibility for users with code still using `GRANT`, we will give grant options on every privilege a user has when they are granted `GRANT` and remove all their grant options when `GRANT` is revoked, in addition to the existing grant option behavior. [#74210][#74210]
-- The `system.protected_timestamp_records` table now has an additional `target` column that stores an encoded protocol buffer representing the target a record protects. This target can be the entire cluster, tenants, or schema objects (databases/tables). [#74281][#74281]
-- The KV tracing of SQL queries (that could be obtained with `\set auto_trace=on,kv`) has been adjusted slightly. Previously, CockroachDB would fully decode the key in each key-value pair, even if some part of the key would not be decoded while tracing was enabled. Now, CockroachDB does not perform any extra decoding, and parts of the key that are not decoded are replaced with `?`. [#74236][#74236]
-- CockroachDB now supports `default_with_oids`, which only accepts a `false` value. [#74499][#74499]
-- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) can export columns of type [array](https://www.cockroachlabs.com/docs/v22.1/data-types). [#73735][#73735]
-- Statements are now formatted prior to being sent to [the DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). This is done using a new [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) that formats statements. [#73853][#73853]
-
-
Operational changes
-
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) now includes the raw `system.settings` table. This table makes it possible to determine whether a [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) has been explicitly set. [#70498][#70498]
-- The meaning of the `sql.distsql.max_running_flows` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) has been extended so that when the value is negative, it is multiplied by the number of CPUs on the node to get the maximum number of concurrent remote flows on the node. The default value is `-128`, meaning that a 4-CPU machine will have up to `512` concurrent remote DistSQL flows, while an 8-CPU machine will have up to `1024`. The previous default was `500`. [#71787][#71787]
-- Some existing settings related to [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup) execution are now listed by [`SHOW CLUSTER SETTING`](https://www.cockroachlabs.com/docs/v22.1/show-cluster-setting). [#71962][#71962]
-- The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) affecting the admission control system enablement are now set to defaults that enable [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control). [#68535][#68535]
-- The default value of the `kv.rangefeed.catchup_scan_iterator_optimization.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is now `true`. [#73473][#73473]
-- Added a metric `addsstable.aswrites` that tracks the number of `AddSSTable` requests ingested as regular write batches. [#73910][#73910]
-- Added a metric `replica.uninitialized` that tracks the number of `Uninitialized` replicas in a store. [#73975][#73975]
-
-
Command-line changes
-
-- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) will now begin processing scheduled jobs after 15 seconds, instead of the 2–5 minutes in a production environment. [#70242][#70242]
-- The 25 max QPS rate limit for workloads on [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) can now be configured with a `--workload-max-qps` flag. [#70642][#70642]
-- The SQL shell now supports the `\du USER` command to show information for the current user. [#70609][#70609]
-- Added support for a CLI shortcut that displays [constraint](https://www.cockroachlabs.com/docs/v22.1/constraints) information similar to PostgreSQL. The shortcut is `\dd TABLE`. [#69783][#69783]
-- Added a `--read-only` flag to [`cockroach sql`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) which will set the `default_session_read_only` variable upon connecting. This is effectively equivalent to the `PGTARGETSESSIONATTRS=read-only` option added to libpq and `psql` in PostgreSQL 13. [#71003][#71003]
-- Previously, [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-merge-logs) output was prefixed by a short machine name by default, which made it difficult to identify the originating node when looking at the merged results. CockroachDB now supports `"${fpath}"` in the `--prefix` argument. [#71254][#71254]
-- Added an option in the [`cockroach demo movr`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) command to populate the `user_promo_code` table. [#61531][#61531]
-- Allowed demoing of CockroachDB's multi-tenant features via the `--multitenant` flag to [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo). [#71026][#71026]
-- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) now runs by default in multi-tenant mode. [#71988][#71988]
-- Added buffering to log sinks. This can be configured with the new `"buffering"` field on any log sink provided via the `--log` or `--log-config-file` flags. [#70330][#70330]
-- The server identifiers (cluster ID, node ID, tenant ID, instance ID) are no longer duplicated at the start of every new [log file](https://www.cockroachlabs.com/docs/v22.1/configure-logs#output-to-files) (during log file rotations). They are now only logged when known during server start-up. (The copy of the identifiers is still included in per-event envelopes for the various [`json` output logging formats](https://www.cockroachlabs.com/docs/v22.1/log-formats#format-json).) [#73306][#73306]
-- The [`cockroach node drain`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) command is now able to drain a node by ID, specified on the command line, from another node in the cluster. It now also supports the flag `--self` for symmetry with [`node decommission`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node#node-decommission). Using `node drain` without either `--self` or a node ID is now deprecated. [#73991][#73991]
-- The deprecated command `cockroach quit` now accepts the flags `--self` and the ability to specify a node ID like `cockroach node drain`. Even though the command is deprecated, this change was performed to ensure symmetry in the documentation until the command is effectively removed. [#73991][#73991]
-- If the expected certificates are not found in the `certs` directory, or no `certs` directory or certificate path is specified, CockroachDB now falls back to Go's TLS code to verify the server CA against the certificates in the OS trust store. If no matching certificate is found, an `x509` error will occur announcing that the certificate is signed by an unknown authority. [#73776][#73776]
-
-
API endpoint changes
-
-- [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed) on a cloud storage sink now allows a new query parameter to specify how the file paths are partitioned. `partition_format="daily"` represents the default behavior of splitting into dates (`2021-05-01/`), `partition_format="hourly"` further partitions them by hour (`2021-05-01/05/`), and `partition_format="flat"` does not partition at all. [#70207][#70207]
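-
- A sketch with a hypothetical table and cloud storage URI (credentials and other sink parameters omitted):
-
- ~~~ sql
- CREATE CHANGEFEED FOR TABLE orders
-     INTO 'gs://my-bucket/changefeed-output?partition_format=hourly';
- ~~~
-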
-- [OpenID Connect (OIDC)](https://www.cockroachlabs.com/docs/v22.1/sso#cluster-settings) support for DB Console is no longer marked as `experimental`. [#71183][#71183]
-- Added new API endpoint for getting a table's index statistics. [#72660][#72660]
-- Added a new batch RPC, and batch method counters are now visible in DB Console and [`_status/vars`](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting). [#72767][#72767]
-
-
DB Console changes
-
-- Fixed drag to zoom on [custom charts](https://www.cockroachlabs.com/docs/v22.1/ui-custom-chart-debug-page#use-the-custom-chart-page). [#70229][#70229]
-- Fixed an issue with dragging to select a time range for a specific window. [#70326][#70326]
-- Added pre-sizing calculation for [**Metrics**](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) page graphs. [#70838][#70838]
-- The `/debug/pprof/goroutineui/` page has a new and improved look. [#71690][#71690]
-- The all nodes report now notifies a user if they need more privileges to view the page's information. [#71960][#71960]
-- The [**Advanced Debug**](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) page now contains an additional link under the **Metrics** header called Rules. This endpoint exposes [Prometheus-compatible alerting](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting#events-to-alert-on) and aggregation rules for CockroachDB metrics. [#72677][#72677]
-- Added an **Index Stats** table and a button to clear index usage stats on the [Table Details](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#table-details) page for each table. [#72948][#72948]
-- Added the ability to remove the dashed underline from sorted table headers for headers with no tooltips. Removed the dashed underline from the **Index Stats** table headers. [#73455][#73455]
-- Added a new [**Index Details**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#index-details) page, which exists for each index on a table. [#73178][#73178]
-- Updated the **Reset Index Stats** button text to be more clear. [#73700][#73700]
-- The time pickers on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages now have the same style and functionality as the time picker on the [**Metrics**](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) page. [#73608][#73608]
-- The **clear SQL stats** links on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages were relabeled **reset SQL stats**, for consistency with the language in the SQL shell. [#73922][#73922]
-- Added the ability to create conditional statement diagnostics by adding two new fields: 1) minimum execution latency, which specifies the limit for when a statement should be tracked, and 2) expiry time, which specifies when a diagnostics request should expire. [#74112][#74112]
-- The **Terminate Session** and **Terminate Query** buttons are again available to be enabled on the [**Sessions Page**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page). [#74408][#74408]
-- Added formatting to statements on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page), [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page), and **Index Details** pages. [#73853][#73853]
-- Updated colors for **Succeeded** badges and the progress bar on the [**Jobs**](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) page. [#73924][#73924]
-
-
Bug fixes
-
-- Fixed a bug where `CURRENT_USER` and [`SESSION_USER`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#special-syntax-forms) were parsed incorrectly. [#70439][#70439]
-- Fixed a bug where [index](https://www.cockroachlabs.com/docs/v22.1/indexes)/[partition](https://www.cockroachlabs.com/docs/v22.1/partitioning) subzones may not have inherited the `global_reads` field correctly in some cases from their parent. [#69983][#69983]
-- Previously, [`DROP DATABASE CASCADE`](https://www.cockroachlabs.com/docs/v22.1/drop-database) could fail while resolving a schema in certain scenarios with the following error: `ERROR: error resolving referenced table ID : descriptor is being dropped`. This is now fixed. [#69789][#69789]
-- [Backfills](https://www.cockroachlabs.com/docs/v22.1/changefeed-messages#schema-changes-with-column-backfill) will now always respect the most up-to-date value of `changefeed.backfill.concurrent_scan_requests` even during an ongoing backfill. [#69933][#69933]
-- The [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-merge-logs) command no longer returns an error when the log decoder attempts to parse older logs. [#68282][#68282]
-- The PostgreSQL-compatible "Access Privilege Inquiry Functions" (e.g., `has_foo_privilege`) were incorrectly returning whether all comma-separated privileges were held, instead of whether any of the provided privileges were held. This incompatibility has been resolved. [#69939][#69939]
-- Queries involving arrays of tuples will no longer spuriously fail due to an encoding error. [#63996][#63996]
-- [`cockroach sql -e`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) (and `demo -e`) can now process all client-side commands, not just `\echo`, `\set`, and a few others. [#70671][#70671]
-- [`cockroach sql --set=auto_trace=on -e 'select ...'`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) (and the similar `demo` command) now produces an execution trace properly. [#70671][#70671]
-- Previously, bulk `INSERT`/`UPDATE` in implicit transactions retried indefinitely if the statement exceeded the default leasing deadline of 5 minutes. Now, if the leasing deadline is exceeded this will be raised back up to the SQL layer to refresh the deadline before trying to commit. [#69936][#69936]
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) now respects the spatial index storage options specified in `PGDUMP` files on indexes it creates. [#66903][#66903]
-- Fixed `IMPORT` in [`tpcc`](https://www.cockroachlabs.com/docs/v22.1/cockroach-workload#tpcc-workload) workload. [#71013][#71013]
-- Some query patterns that previously could cause a single node to become a [hotspot](https://www.cockroachlabs.com/docs/v22.1/query-behavior-troubleshooting#single-hot-node) have been fixed so that the load is evenly distributed across the whole cluster. [#70648][#70648]
-- Fixed a bug where the 2-parameter `setval` built-in function previously caused the [sequence](https://www.cockroachlabs.com/docs/v22.1/create-sequence) to increment incorrectly one extra time. For a sequence to increment, use `setval(seq, val, true)`. [#71643][#71643]
-- Previously, the effects of the `setval` and `nextval` built-in functions would be rolled back if the surrounding transaction was rolled back. This was not correct, as `setval` is not supposed to respect transaction boundaries. This is now fixed. [#71643][#71643]
-- In v21.2, jobs that fail to revert are retried unconditionally, but with exponential backoff. In the mixed-version state there is no exponential backoff, so retrying unconditionally would not be appropriate. The behavior has been changed so that, before the v21.2 upgrade is finalized, these jobs enter the revert-failed state as in v21.1. [#71780][#71780]
-- Fixed a bug that prevented rollback of [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v22.1/alter-primary-key) when the old primary key was interleaved. [#71780][#71780]
-- Previously, adding new values to a user-defined [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) type would cause a prepared statement using that type to not work. This now works as expected. [#71632][#71632]
-- Previously, when records and [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) types containing escape sequences were shown in the CLI, they would be incorrectly double-escaped. This is now fixed. [#71916][#71916]
-- `SCHEMA CHANGE` and `SCHEMA CHANGE GC` jobs following a `DROP ... CASCADE` now have sensible names, instead of `''` and `'GC for '`, respectively. [#70630][#70630]
-- Fixed a race condition that could have caused [core changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeed-for) whose targeted table became invalid to not explain why when shutting down. [#72490][#72490]
-- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) can now be launched with `--global` and `--multitenant=true` options. [#72750][#72750]
-- Y-axis labels on [custom charts](https://www.cockroachlabs.com/docs/v22.1/ui-custom-chart-debug-page#use-the-custom-chart-page) no longer display `undefined`. [#73055][#73055]
-- Raft snapshots now detect timeouts earlier and avoid spamming the logs with [`context deadline exceeded`](https://www.cockroachlabs.com/docs/v22.1/common-errors#context-deadline-exceeded) errors. [#73279][#73279]
-- Error messages produced during import are now truncated. Previously, [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) could potentially generate large error messages that could not be persisted to the jobs table, resulting in a failed import never entering the failed state and instead retrying repeatedly. [#73303][#73303]
-- Servers no longer crash due to panics in HTTP handlers. [#72395][#72395]
-- `crdb_internal.table_indexes` now shows if an index is sharded or not. [#73380][#73380]
-- Previously, creating indexes with special characters would fail to identify indexes with the same matching name, which caused an internal error. This is now fixed. [#73367][#73367]
-- CockroachDB now prohibits mixed dimension [`LINESTRING`](https://www.cockroachlabs.com/docs/v22.1/linestring) in `ST_MakePolygon`. [#73489][#73489]
-- Index `CREATE` statements in the `pg_indexes` table now show a hash-sharding bucket count if an index is hash-sharded. The column direction is no longer shown for `gin` indexes in `pg_indexes`. [#73491][#73491]
-- Uninitialized replicas that are abandoned after an unsuccessful snapshot no longer perform periodic background work, so they no longer have a non-negligible cost. [#73362][#73362]
-- Fixed a bug that caused incorrect evaluation of placeholder values in `EXECUTE` statements. The bug presented when the `PREPARE` statement cast a placeholder value, e.g., `PREPARE s AS SELECT $1::INT2`. If the assigned value for `$1` exceeded the maximum width value of the cast target type, the result value of the cast could be incorrect. This bug had been present since v19.1 or earlier. [#73762][#73762]
-- Previously, during [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore), a `system.namespace` entry was not inserted for synthetic public schemas. This is now fixed. [#73875][#73875]
-- Fixed a bug that caused internal errors when altering the primary key of a table. The bug was only present if the table had a partial index with a predicate that referenced a [virtual computed column](https://www.cockroachlabs.com/docs/v22.1/computed-columns). This bug was present since virtual computed columns were added in v21.1.0. [#74102][#74102]
-- Creating a [foreign key](https://www.cockroachlabs.com/docs/v22.1/foreign-key) that references a hash-sharded key no longer fails. [#74140][#74140]
-- Raft snapshots no longer risk starvation under very high concurrency. Before this fix, it was possible that many of Raft snapshots could be starved and prevented from succeeding due to timeouts, which were accompanied by errors like [`error rate limiting bulk io write: context deadline exceeded`](https://www.cockroachlabs.com/docs/v22.1/common-errors#context-deadline-exceeded). [#73288][#73288]
-- Portals in the extended protocol of the PostgreSQL wire protocol can now be used from implicit transactions and can be executed multiple times if there is a row-count limit applied to the portal. Previously, trying to execute the same portal twice would result in an `unknown portal` error. [#74242][#74242]
-- Fixed a bug that incorrectly allowed creating [computed column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) expressions, expression indexes, and partial index predicate expressions with mutable casts between [`STRING` types](https://www.cockroachlabs.com/docs/v22.1/data-types) and the types `REGCLASS`, `REGNAMESPACE`, `REGPROC`, `REGPROCEDURE`, `REGROLE`, and `REGTYPE`. Creating such computed columns, expression indexes, and partial indexes is now prohibited. Any tables with these types of expressions may be corrupt and should be dropped and recreated. [#74286][#74286]
-- Fixed a bug that, in very rare cases, could result in a node terminating with a fatal error: `unable to remove placeholder: corrupted replicasByKey map`. To avoid potential data corruption, users affected by this crash should not restart the node, but instead [decommission it](https://www.cockroachlabs.com/docs/v22.1/node-shutdown?filters=decommission) in absentia and have it rejoin the cluster under a new `nodeID`. [#73734][#73734]
-- Previously, when [foreign keys](https://www.cockroachlabs.com/docs/v22.1/foreign-key) were included inside an `ADD COLUMN` statement and multiple columns were added in a single statement, the foreign key would be applied to the first added column (or an error would be generated based on the wrong column). This is now fixed. [#74411][#74411]
-- Previously, a double-nested [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) in a DistSQL query would not get hydrated on remote nodes, resulting in a panic. This is now fixed. [#74189][#74189]
-- Fixed a panic when attempting to access the hottest ranges (e.g., via the `/_status/hotranges` endpoint) before initial statistics had been gathered. [#74507][#74507]
-- Previously, setting [`sslmode=require`](https://www.cockroachlabs.com/docs/v22.1/connection-parameters#additional-connection-parameters) would check for local certificates, so omitting a certs path would cause an error even though `require` does not verify server certificates. This has been fixed by bypassing certificate path checking for `sslmode=require`. This bug had been present since v21.2.0. [#73776][#73776]
-- Previously, queries with window functions returning the [`INT`](https://www.cockroachlabs.com/docs/v22.1/data-types), `FLOAT`, `BYTES`, `STRING`, `UUID`, or `JSON` type could return incorrect results or internal errors when [disk spilling](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution#disk-spilling-operations) occurred. The bug was introduced in v21.2.0 and is now fixed. [#74491][#74491]
-- Previously, `MIN`/`MAX` could be incorrectly calculated when used as window functions in some cases after [spilling to disk](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution#disk-spilling-operations). The bug was introduced in v21.2.0 and is now fixed. [#74491][#74491]
-- Previously, [`IMPORT TABLE ... PGDUMP`](https://www.cockroachlabs.com/docs/v22.1/import#import-a-table-from-a-postgresql-database-dump) with a `COPY FROM` statement in the dump file that has less target columns than the `CREATE TABLE` schema definition would result in a nil pointer exception. This is now fixed. [#74601][#74601]
-
-
Performance improvements
-
-- Mutation statements with a `RETURNING` clause that are not inside an explicit transaction are faster in some cases. [#70200][#70200]
-- Added collection of basic table statistics during an [import](https://www.cockroachlabs.com/docs/v22.1/import), to help the optimizer until full statistics collection completes. [#67106][#67106]
-- The accuracy of histogram calculations for `BYTES` types has been improved. As a result, the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) should generate more efficient query plans in some cases. [#68740][#68740]
-- A [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries) query with both `MIN(LeadingIndexColumn)` and `MAX(LeadingIndexColumn)` can now be performed with two `LIMITED SCAN`s instead of a single `FULL SCAN`. [#70496][#70496]
-- A [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries) query from a single table with more than one `MIN` or `MAX` scalar aggregate expression and a `WHERE` clause can now be performed with `LIMITED SCAN`s, one per aggregate expression, instead of a single `FULL SCAN`. Note: No other aggregate function, such as `SUM`, may be present in the query block in order for it to be eligible for this transformation. This optimization should occur when each `MIN` or `MAX` expression involves a leading index column, so that a sort is not required for the limit operation, and the resulting query plan will appear cheapest to the optimizer. [#70854][#70854]
-- Queries with many ORed `WHERE` clause predicates previously took an excessive amount of time for the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) to process, especially if the predicates involved index columns and there were more than 1000 predicates (which can happen with application-generated SQL). The processing of SQL with many ORed predicates has been optimized so that a query plan can be generated in seconds instead of minutes or hours. [#71247][#71247]
-- Creating many [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) in parallel now runs faster due to improved concurrency notifying the jobs subsystem. [#71909][#71909]
-- The `sqlinstance` subsystem no longer reads from the backing SQL table for every request for SQL instance details. This improves performance for multi-region setups in the multi-tenant architecture. [#69976][#69976]
-- Improved efficiency of looking up old historical descriptors. [#71239][#71239]
-- Improved performance of some `GROUP BY` queries with a `LIMIT` if there is an index ordering that matches a subset of the grouping columns. In this case the total number of aggregations needed to satisfy the `LIMIT` can be emitted without scanning the entire input, enabling the execution to be more effective. [#71546][#71546]
-- [`var_pop` and `stddev_pop`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) aggregate functions are now evaluated more efficiently in a distributed setting. [#73712][#73712]
-- Improved job performance in the face of concurrent schema changes by reducing contention. [#72297][#72297]
-- [Incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) now use less memory to verify coverage of prior backups. [#74393][#74393]
-- CockroachDB now retrieves the password credentials of the SQL client concurrently without waiting for the password response during the authentication exchange. This can yield a small latency reduction in new SQL connections. [#74365][#74365]
-- CockroachDB now allows rangefeed streams to use a separate HTTP connection when the `kv.rangefeed.use_dedicated_connection_class.enabled` setting is turned on. Using a separate connection class reduces the possibility of OOMs when running rangefeeds against very large tables. The connection window size for rangefeeds can be adjusted via the `COCKROACH_RANGEFEED_INITIAL_WINDOW_SIZE` environment variable, whose default is 128KB. [#74222][#74222]
-- The merging of [incremental backup](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) layers during [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) now uses a simpler and less memory-intensive algorithm. [#74394][#74394]
-- The default snapshot recovery/rebalance rates `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` were bumped from 8MB/s to 32MB/s. Production experience has shown that the earlier values were too conservative. Users might observe higher network utilization during rebalancing/recovery in service of rebalancing/recovering faster (for the latter, possibly reducing the mean time to recovery). If the extra utilization is undesirable, users can manually revert these rates back to their original settings of 8 MB/s. [#71814][#71814]
-
-
Build changes
-
-- Upgraded to Go v1.17. [#69603][#69603]
-
-
Miscellaneous
-
-
Docker
-
-- Env variables and init scripts in `docker-entrypoint-initdb.d` for the [`start-single-node`](https://www.cockroachlabs.com/docs/v22.1/cockroach-start-single-node) command are now supported. [#70238][#70238]
-
-
-
-
Contributors
-
-This release includes 1720 merged PRs by 132 authors.
-
-We would like to thank the following contributors from the CockroachDB community:
-
-- Catherine J (first-time contributor)
-- Eudald (first-time contributor)
-- Ganeshprasad Biradar
-- Josh Soref (first-time contributor)
-- Max Neverov
-- Miguel Novelo (first-time contributor)
-- Paul Lin (first-time contributor)
-- Remy Wang (first-time contributor)
-- Rupesh Harode
-- TennyZhuang (first-time contributor)
-- Tharun
-- Ulf Adams
-- Zhou Fang (first-time contributor)
-- lpessoa (first-time contributor)
-- mnovelodou (first-time contributor)
-- neeral
-- shralex (first-time contributor)
-- tukeJonny (first-time contributor)
-
-
-
-- Non-standard [`cron`](https://wikipedia.org/wiki/Cron) expressions that specify seconds or year fields are no longer supported. [#74881][#74881]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) will now filter out [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) from events by default. [#74916][#74916]
-- The [environment variable](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands#environment-variables) that controls the max amount of CPU that can be taken by password hash computations during authentication was renamed from `COCKROACH_MAX_BCRYPT_CONCURRENCY` to `COCKROACH_MAX_PW_HASH_COMPUTE_CONCURRENCY`. Its semantics remain unchanged. [#74301][#74301]
-
-
Security updates
-
-- CockroachDB is now able to [authenticate users](https://www.cockroachlabs.com/docs/v22.1/security-reference/authentication) via the DB Console and through SQL sessions when the client provides a cleartext password and the stored credentials are encoded using the SCRAM-SHA-256 algorithm. Support for a SCRAM authentication flow is a separate feature and is not the target of this release note. In particular, for SQL client sessions it makes it possible to use the authentication methods `password` (cleartext passwords), and `cert-password` (TLS client cert or cleartext password) with either CRDB-BCRYPT or SCRAM-SHA-256 stored credentials. Previously, only CRDB-BCRYPT stored credentials were supported for cleartext password authentication. [#74301][#74301]
-- The hash method used to encode cleartext passwords before storing them is now configurable, via the new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `server.user_login.password_encryption`. Its supported values are `crdb-bcrypt` and `scram-sha-256`. The cluster setting is only enabled after all cluster nodes have been upgraded, at which point its default value is `scram-sha-256`. Prior to completion of the upgrade, the cluster behaves as if the cluster setting is set to `crdb-bcrypt` for backward compatibility. Note that the preferred way to populate password credentials for SQL user accounts is to pre-compute the hash client-side, and pass the precomputed hash via [`CREATE USER WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/create-user), [`CREATE ROLE WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/create-role), [`ALTER USER WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-user), or [`ALTER ROLE WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-role). This ensures that the server never sees the cleartext password. [#74301][#74301]
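-
- For example, once all nodes have been upgraded, an operator can select the hash method explicitly (a minimal sketch using the setting named above):
-
- ~~~ sql
- SET CLUSTER SETTING server.user_login.password_encryption = 'scram-sha-256';
- ~~~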
-- The cost of the hashing function for `scram-sha-256` is now configurable via the new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `server.user_login.password_hashes.default_cost.scram_sha_256`. Its default value is 119680, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to brute force passwords via repeated login attempts. Future versions of CockroachDB will likely update this default value. [#74301][#74301]
-- When using the default HBA authentication method `cert-password` for SQL client connections, and the SQL client does not present a TLS client certificate to the server, CockroachDB now automatically upgrades the password handshake protocol to use SCRAM-SHA-256 if the user's stored password uses the SCRAM encoding. The previous behavior of requesting a cleartext password is still used if the stored password is encoded using the CRDB-BCRYPT format. An operator can force clients to _always_ request SCRAM-SHA-256 when a TLS client cert is not provided in order to guarantee the security benefits of SCRAM using the authentication methods `cert-scram-sha-256` (either TLS client cert _or_ SCRAM-SHA-256) and `scram-sha-256` (only SCRAM-SHA-256). As in previous releases, mandatory cleartext password authentication can be requested (e.g., for debugging purposes) by using the HBA method `password`. This automatic protocol upgrade can be manually disabled using the new cluster setting `server.user_login.cert_password_method.auto_scram_promotion.enable` and setting it to `false`. Disable automatic protocol upgrades if, for example, certain client drivers are found to not support SCRAM-SHA-256 authentication properly. [#74301][#74301]
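-
- As an illustrative sketch (assuming the HBA rules are managed via the `server.host_based_authentication.configuration` cluster setting and using a single catch-all rule), an operator could require SCRAM whenever no TLS client certificate is presented:
-
- ~~~ sql
- SET CLUSTER SETTING server.host_based_authentication.configuration = 'host all all all cert-scram-sha-256';
- ~~~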
-- In order to promote a transition to SCRAM-SHA-256 for password authentication, CockroachDB now automatically attempts to convert stored password hashes to SCRAM-SHA-256 after a cleartext password authentication succeeds if the target hash method configured via `server.user_login.password_encryption` is `scram-sha-256`. This auto-conversion can happen either during SQL logins or HTTP logins that use passwords, whichever occurs first. When an auto-conversion occurs, a structured event of type `password_hash_converted` is logged to the `SESSIONS` channel. The `PBKDF2` iteration count on the hash is chosen in order to preserve the latency of client logins, to remain similar to the latency incurred from the starting `bcrypt` cost. (For example, the default configuration of `bcrypt` cost 10 is converted to a SCRAM iteration count of 119680.) This choice, however, lowers the cost of brute forcing passwords for an attacker with access to the encoded password hashes, if they have access to ASICs or GPUs, by a factor of ~10. For example, if it would previously cost them $1,000,000 to brute force a `crdb-bcrypt` hash, it would now cost them "just" $100,000 to brute force the SCRAM-SHA-256 hash that results from this conversion. If an operator wishes to compensate for this, three options are available:
- 1. Set up their infrastructure such that only passwords with high entropy can be used. For example, this can be achieved by disabling the ability of end-users to select their own passwords and auto-generating passwords for the user, or enforcing some entropy checks during password selection. This way, the entropy of the password itself compensates for the lower hash complexity.
- 1. Manually select a higher `SCRAM` iteration count. This can be done either by pre-computing `SCRAM` hashes client-side and providing the pre-computed hash using `ALTER USER WITH PASSWORD`, or adjusting the cluster setting `server.user_login.password_hashes.default_cost.scram_sha_256` and asking CockroachDB to recompute the hash.
- 1. Disable the auto-conversion of `crdb-bcrypt` hashes to `scram-sha-256` altogether, using the new cluster setting `server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled`. This approach is discouraged as it removes the other security protections offered by SCRAM authentication. The conversion also only happens if the target configured method via `server.user_login.password_encryption` is `scram-sha-256`, because the goal of the conversion is to move clusters towards using SCRAM. [#74301][#74301]
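-
- For example, option 3 above is a single cluster setting change (a sketch using the setting named in this note):
-
- ~~~ sql
- SET CLUSTER SETTING server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled = false;
- ~~~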
-- Added support for [query cancellation](https://www.cockroachlabs.com/docs/v22.1/cancel-query) via the `pgwire` protocol. Since this protocol is unauthenticated, there are a few precautions included.
- 1. The protocol requires that a 64-bit key is used to uniquely identify a session. Some of these bits are used to identify the CockroachDB node that owns the session. The rest of the bits are all random. If the node ID is small enough, then only 12 bits are used for the ID, and the remaining 52 bits are random. Otherwise, 32 bits are used for both the ID and the random secret.
- 1. A fixed per-node rate limit is used. There can only be at most 256 failed cancellation attempts per second. Any other cancel requests that exceed this rate are ignored. This makes it harder for an attacker to guess random cancellation keys. Specifically, if we assume a 32-bit secret and 256 concurrent sessions on a node, it would take 2^16 seconds (about 18 hours) for an attacker to be certain they have cancelled a query.
- 1. No response is returned for a cancel request. This makes it impossible for an attacker to know if their guesses are working. Unsuccessful attempts are [logged internally](https://www.cockroachlabs.com/docs/v22.1/logging-use-cases#security-and-audit-monitoring) with warnings. Large numbers of these messages could indicate malicious activity. [#67501][#67501]
-- The cluster setting `server.user_login.session_revival_token.enabled` has been added. It is `false` by default. If set to `true`, then a new token-based authentication mechanism is enabled. A token can be generated using the `crdb_internal.create_session_revival_token` built-in [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators). The token has a lifetime of 10 minutes and is cryptographically signed to prevent spoofing and brute-forcing attempts. When initializing a session later, the token can be presented in a `pgwire` `StartupMessage` with a parameter name of `crdb:session_revival_token_base64`, with the value encoded in `base64`. If this parameter is present, all other authentication checks are disabled, and if the token is valid and has a valid signature, the user who originally generated the token authenticates into a new SQL session. If the token is not valid, then authentication fails. The token does not have use-once semantics, so the same token can be used any number of times to create multiple new SQL sessions within the 10 minute lifetime of the token. As such, the token should be treated as highly sensitive cryptographic information. This feature is meant to be used by multi-tenant deployments to move a SQL session from one node to another. It requires the presence of a valid `Ed25519` keypair in `tenant-signing.<tenant_id>.crt` and `tenant-signing.<tenant_id>.key`. [#75660][#75660]
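-
- A minimal sketch of the token flow (the exact output encoding of the built-in is not shown here and may need adjusting):
-
- ~~~ sql
- SET CLUSTER SETTING server.user_login.session_revival_token.enabled = true;
- -- Generate a token in the current session; present it later in a pgwire
- -- StartupMessage under the crdb:session_revival_token_base64 parameter.
- SELECT crdb_internal.create_session_revival_token();
- ~~~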
-- When the `sql.telemetry.query_sampling.enabled` cluster setting is enabled, SQL names and client IPs are no longer redacted in telemetry logs. [#76676][#76676]
-
-
General changes
-
-- The following metrics were added for observability of cancellation requests made using the PostgreSQL wire protocol:
- - `sql.pgwire_cancel.total`
- - `sql.pgwire_cancel.ignored`
- - `sql.pgwire_cancel.successful`
-
- The metrics are all counters. The `ignored` counter is incremented if a cancel request was ignored due to exceeding the per-node rate limit of cancel requests. [#76457][#76457]
-- Documentation was added describing how jobs and scheduled jobs function and how they are used in CockroachDB. [#73995][#73995]
-
-
Enterprise edition changes
-
-- Client certificates may now be provided for the `webhook` [changefeed sink](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks). [#74645][#74645]
-- CockroachDB now redacts more potentially sensitive URI elements from changefeed job descriptions. This is a breaking change for workflows that copy URIs. As an alternative, the unredacted URI may be accessed from the jobs table directly. [#75174][#75174]
-- Changefeeds now output the topic names created by the Kafka sink. Furthermore, these topic names are displayed in the [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v22.1/show-jobs#show-changefeed-jobs) query. [#75223][#75223]
-- [Backup and restore](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) jobs now allow encryption/decryption with GCS KMS. [#75750][#75750]
-- [Kafka sinks](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks#kafka) support larger messages, up to 2GB in size. [#76265][#76265]
-- Added support for a new SQL statement called `ALTER CHANGEFEED`, which allows users to add/drop targets for an existing changefeed. The syntax of the statement is: `{% raw %}ALTER CHANGEFEED <job_id> {{ADD|DROP} <targets>}...{% endraw %}`
-
- There can be an arbitrary number of `ADD` or `DROP` commands in any order. For example:
-
- ~~~ sql
- ALTER CHANGEFEED 123 ADD foo,bar DROP baz;
- ~~~
-
- With this statement, users can avoid going through the process of altering a changefeed on their own, and rely on CockroachDB to carry out this task. [#75737][#75737]
-- Changefeeds running on tables with a low [`gc.ttlseconds`](https://www.cockroachlabs.com/docs/v22.1/configure-replication-zones#gc-ttlseconds) value now function more reliably due to protected timestamps being maintained for the changefeed targets at the resolved timestamp of the changefeed. The frequency at which the protected timestamp is updated to the resolved timestamp can be configured through the `changefeed.protect_timestamp_interval` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). If the changefeed lags too far behind such that storage of old data becomes an issue, cancelling the changefeed will release the protected timestamps and allow garbage collection to resume. If `protect_data_from_gc_on_pause` is unset, pausing the changefeed will release the existing protected timestamp record. [#76605][#76605]
-- Added support to the `ALTER CHANGEFEED` statement so that users can edit and unset the options of an existing changefeed. The syntax of this addition is the following:
-
- ~~~ sql
- ALTER CHANGEFEED <job_id> SET <options> UNSET <options>
- ~~~
-
- [#76583][#76583]
-- Users may now alter the sink URI of an existing changefeed. This can be achieved by executing `ALTER CHANGEFEED <job_id> SET sink = '<new_sink_uri>'`, where the sink type of the new sink must match the sink type of the old sink that was chosen at the creation of the changefeed. [#77043][#77043]
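-
- For example, a sketch with a hypothetical job ID and Kafka URI (the new sink must be of the same type as the original sink):
-
- ~~~ sql
- ALTER CHANGEFEED 123 SET sink = 'kafka://new-broker.example.com:9092';
- ~~~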
-
-
SQL language changes
-
-- `CHECK` constraints on the shard column used by [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) are no longer printed in the corresponding `SHOW CREATE TABLE`. The constraint had been shown because CockroachDB lacked logic to ensure that shard columns which are part of hash-sharded indexes always have the check constraint which the optimizer relies on to achieve properly optimized plans on hash-sharded indexes. The constraint is now implied by the `USING HASH` clause on the relevant index. [#74179][#74179]
-- The experimental command `SCRUB PHYSICAL` is no longer implemented. [#74761][#74761]
-- The [`CREATE MATERIALIZED VIEW`](https://www.cockroachlabs.com/docs/v22.1/views#materialized-views) statement now supports `WITH DATA`. [#74821][#74821]
-- CockroachDB now has a `crdb_internal.replication_stream_spec` [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) for stream replication. [#73886][#73886]
-- CockroachDB has a new [role option](https://www.cockroachlabs.com/docs/v22.1/show-roles) `VIEWACTIVITYREDACTED` introduced in v21.2.5 that is similar to `VIEWACTIVITY` but restricts the use of [statement diagnostics bundles](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#diagnostics). It is possible for a user to have both role options (`VIEWACTIVITY` and `VIEWACTIVITYREDACTED`), but `VIEWACTIVITYREDACTED` takes precedence. [#74715][#74715]
-- In v21.2.5 CockroachDB added support for the `ON CONFLICT ON CONSTRAINT` form of [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v22.1/insert#on-conflict-clause). This form is added for compatibility with PostgreSQL. It permits explicitly selecting an arbiter index for `INSERT ON CONFLICT`, rather than inferring one using a column list, which is the default behavior. [#73460][#73460]
-- [Imports](https://www.cockroachlabs.com/docs/v22.1/import) now check readability earlier for multiple files to fail more quickly if, for example, permissions are invalid. [#74863][#74863]
-- In v21.2.5 CockroachDB added new role options, `NOSQLLOGIN` and its inverse `SQLLOGIN`, which control whether a user can log in via SQL while retaining the ability to log in to the [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview) (as opposed to `NOLOGIN`, which restricts both SQL and DB Console access). Without any of these role options, all login behavior is permitted. OIDC logins to the DB Console continue to be permitted with `NOSQLLOGIN` set. [#74706][#74706]
-- Added the `default_table_access_method` [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars), which only takes in `heap`, to match the behavior of PostgreSQL. [#74774][#74774]
-- The [distributed plan diagram](https://www.cockroachlabs.com/docs/v22.1/explain-analyze#statement-plan-tree-properties) now lists scanned column names for `TableReaders`. [#75114][#75114]
-- Users can now specify the owner when [creating a database](https://www.cockroachlabs.com/docs/v22.1/create-database), similar to PostgreSQL: `CREATE DATABASE name [ [ WITH ] OWNER [=] user_name ]`. [#74867][#74867]
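-
- For example (hypothetical database and user names):
-
- ~~~ sql
- CREATE DATABASE movr WITH OWNER maxroach;
- ~~~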
-- The [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role) and [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements now accept password hashes computed using the `scram-sha-256` method. For example: `CREATE USER foo WITH PASSWORD 'SCRAM-SHA-256$4096:B5VaT...'`. As for other types of pre-hashed passwords, this auto-detection can be disabled by changing the cluster setting `server.user_login.store_client_pre_hashed_passwords.enabled` to `false`. To ascertain whether a `scram-sha-256` password hash will be recognized, orchestration code can use the [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) `crdb_internal.check_password_hash_format()`. Follow these steps to encode the SCRAM-SHA-256 password:
- 1. Get the cleartext password string.
- 1. Generate a salt, iteration count, stored key and server key according to [RFC 5802](https://datatracker.ietf.org/doc/html/rfc5802).
- 1. Encode the hash into a format recognized by CockroachDB: the string `SCRAM-SHA-256$`, followed by the iteration count, followed by `:`, followed by the base64-encoded salt, followed by `$`, followed by the base-64 stored key, followed by `:`, followed by the base-64 server key. [#74301][#74301]
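-
- As a sketch, orchestration code could check that a hash will be recognized before storing it (the hash below is a truncated placeholder, not a real SCRAM hash):
-
- ~~~ sql
- SELECT crdb_internal.check_password_hash_format('SCRAM-SHA-256$119680:c2FsdA==$c3RvcmVkS2V5:c2VydmVyS2V5');
- ~~~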
-- The session variable `password_encryption` is now exposed to SQL clients. Note that SQL clients cannot modify its value directly; it is only configurable via a cluster setting. [#74301][#74301]
-- When possible, CockroachDB will now automatically require the PostgreSQL-compatible SCRAM-SHA-256 protocol when performing password validation during SQL client login. This mechanism is not used when SQL clients use TLS client certs, which is the recommended approach. This assumes support for SCRAM-SHA-256 in client drivers. As of 2020, SCRAM-SHA-256 is prevalent in the PostgreSQL driver ecosystem. However, users should be mindful of the following possible behavior changes:
- - An application that tries to detect whether password verification has failed by checking server error messages, might observe different error messages with SCRAM-SHA-256. Those checks, if present, need to be updated.
- - If a client driver simply does not support SCRAM-SHA-256 at all, the operator retains the option to set the cluster setting `server.user_login.cert_password_method.auto_scram_promotion.enable` to `false` to force the previous password verification method instead. [#74301][#74301]
-- After a cluster upgrade, the first time a SQL client logs in using password authentication, the password will be converted to a new format (`scram-sha-256`) if it was encoded with `crdb-bcrypt` previously. This conversion will increase the latency of that initial login by a factor of ~2x, but it will be reduced again after the conversion completes. If login latency is a concern, operators should perform the password conversion ahead of time, by computing new `SCRAM` hashes for the clients via [`ALTER USER WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-user) or [`ALTER ROLE WITH PASSWORD`](https://www.cockroachlabs.com/docs/v22.1/alter-role). This conversion can also be disabled via the new cluster setting `server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled`. [#74301][#74301]
-- Statements are no longer formatted prior to being sent to the UI, but the new built-in function remains. [#75443][#75443]
-- The default SQL statistics flush interval is now 10 minutes. A new cluster setting `sql.stats.aggregation.interval` controls the aggregation interval of SQL stats, with a default value of 1 hour. [#74831][#74831]
-- [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries), [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert), [`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete), and [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update) can no longer be granted or revoked on databases. Previously `SELECT`, `INSERT`, `DELETE`, and `UPDATE` would be converted to `ALTER DEFAULT PRIVILEGES` on `GRANT`s and were revocable. [#72665][#72665]
-- Added `pgcodes` to errors when an invalid storage parameter is passed. [#75262][#75262]
-- Implemented the [`ALTER TABLE ... SET (...)`](https://www.cockroachlabs.com/docs/v22.1/alter-table) syntax. We do not support any storage parameters yet, so this statement does not change the schema. [#75262][#75262]
-- [`SHOW GRANTS ON TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-grants) now includes the `is_grantable` column. [#75226][#75226]
-- Implemented the [`ALTER TABLE ... RESET (...)`](https://www.cockroachlabs.com/docs/v22.1/alter-table) syntax. This statement currently does not change the schema. [#75429][#75429]
-- S3 URIs used for [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`EXPORT`](https://www.cockroachlabs.com/docs/v22.1/export), or [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed) can now include the query parameter `S3_STORAGE_CLASS` to configure the storage class used when that job creates objects in the designated S3 bucket. [#75588][#75588]
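-
- For example, a sketch with a hypothetical bucket; `STANDARD_IA` is one of the storage classes S3 accepts:
-
- ~~~ sql
- BACKUP DATABASE defaultdb INTO 's3://my-bucket/backups?AUTH=implicit&S3_STORAGE_CLASS=STANDARD_IA';
- ~~~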
-- The [cost based optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) now modifies the query cost based on the `avg_size` table statistic, which may change query plans. This is controlled by the [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars) `cost_scans_with_default_col_size`, and can be disabled by setting it to `true`: `SET cost_scans_with_default_col_size=true`. [#74551][#74551]
-- The [`crdb_internal.jobs`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) table now has a new column `execution_events` which is a structured JSON form of `execution_errors`. [#75556][#75556]
-- The [privileges](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization) reported in `information_schema.schema_privileges` for non-user-defined schemas are no longer inferred from the privileges on the parent database. Instead, virtual schemas (like `pg_catalog` and `information_schema`) always report the `USAGE` privilege for the public role. The `pg_temp` schema always reports `USAGE` and `CREATE` privileges for the public role. [#75628][#75628]
-- Transaction ID to transaction fingerprint ID mapping is now stored in the new transaction ID cache, a FIFO unordered in-memory buffer. The size of the buffer is 64 MB by default and configurable via the `sql.contention.txn_id_cache.max_size` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). Consequently, two additional metrics are introduced:
- - `sql.contention.txn_id_cache.size`: the current memory usage of transaction ID cache.
- - `sql.contention.txn_id_cache.discarded_count`: the number of resolved transaction IDs that are dropped due to memory constraints. [#74115][#74115]
-- Added new [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) called `crdb_internal.revalidate_unique_constraint`, `crdb_internal.revalidate_unique_constraints_in_table`, and `crdb_internal.revalidate_unique_constraints_in_all_tables`, which can be used to revalidate existing unique constraints. The different variations support validation of a single constraint, validation of all unique constraints in a table, and validation of all unique constraints in all tables in the current database, respectively. If any constraint fails validation, the functions will return an error with a hint about which data caused the constraint violation. These violations can then be resolved manually by updating or deleting the rows in violation. This will be useful to users who think they may have been affected by issue [#73024](https://github.com/cockroachdb/cockroach/issues/73024). [#75548][#75548]
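-
- For example, to revalidate every unique constraint in every table of the current database (a minimal sketch):
-
- ~~~ sql
- SELECT crdb_internal.revalidate_unique_constraints_in_all_tables();
- ~~~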
-- The [`SHOW GRANTS ON SCHEMA`](https://www.cockroachlabs.com/docs/v22.1/show-grants) statement now includes the `is_grantable` column. [#75722][#75722]
-- CockroachDB now disallows [type casts](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#explicit-type-coercions) from [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum) to [`BYTES`](https://www.cockroachlabs.com/docs/v22.1/bytes). [#75816][#75816]
-- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) has a new `compression` option whose value can be `gzip` or `snappy`. An example query:
-
- ~~~ sql
- EXPORT INTO PARQUET 'nodelocal://0/compress_snappy' WITH compression = snappy FROM SELECT * FROM foo
- ~~~
-
- By default, the Parquet file will be uncompressed. With compression, the file name will end in `.parquet.gz` or `.parquet.snappy`. [#74661][#74661]
-
-- Setting a UTC timezone offset of greater than 167 or less than -167 now returns an error. For example:
-
- ~~~ sql
- SET TIME ZONE '168'
- ~~~
-
- Gives error:
-
- ~~~
- invalid value for parameter "timezone": "'168'": cannot find time zone "168": UTC timezone offset is out of range.
- ~~~
-
- ~~~ sql
- SET TIME ZONE '-168'
- ~~~
-
- Gives error:
-
- ~~~
- invalid value for parameter "timezone": "'-168'": cannot find time zone "-168": UTC timezone offset is out of range.
- ~~~
-
- [#75822][#75822]
-
-- The [`RESET ALL`](https://www.cockroachlabs.com/docs/v22.1/reset-vars) statement was added, which resets the values of all [session variables](https://www.cockroachlabs.com/docs/v22.1/show-vars#supported-variables) to their default values. [#75804][#75804]
-- The [`SHOW GRANTS ON DATABASE`](https://www.cockroachlabs.com/docs/v22.1/show-grants) statement now includes the `is_grantable` column. [#75854][#75854]
-- Reordered unimplemented tables in [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog) and `information_schema` to match PostgreSQL. [#75461][#75461]
-- CockroachDB will now remove incompatible database privileges to be consistent with PostgreSQL. Existing [`SELECT`](https://www.cockroachlabs.com/docs/v22.1/selection-queries), [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert), [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update), and [`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete) privileges on databases will be converted to the equivalent default privileges. [#75562][#75562]
-- CockroachDB now allows users who do not have `ADMIN` privileges to use `SHOW RANGES` if the `ZONECONFIG` privilege is granted to the user. [#75551][#75551]
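-
- A sketch with hypothetical table and user names:
-
- ~~~ sql
- GRANT ZONECONFIG ON TABLE rides TO appuser;
- SHOW RANGES FROM TABLE rides;
- ~~~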
-- The `WITH (param=value)` syntax is now allowed for [primary key](https://www.cockroachlabs.com/docs/v22.1/primary-key) definitions, to align with PostgreSQL and to support `WITH (bucket_count=...)` syntax for [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes). [#75971][#75971]
-- CockroachDB now aliases the `idle_session_timeout` session variable with the `idle_in_session_timeout` variable to align with PostgreSQL. [#76002][#76002]
-- The `SHOW GRANTS ON TYPE` statement now includes the `is_grantable` column. [#75957][#75957]
-- The `bucket_count` storage parameter was added. To create hash-sharded indexes, you can use the new syntax: `USING HASH WITH (bucket_count=xxx)`. The `bucket_count` storage parameter can only be used with `USING HASH`. The old `WITH BUCKET_COUNT=xxx` syntax is still supported for backward compatibility. However, you can only use the old or new syntax, but not both. An error is returned for mixed clauses: `USING HASH WITH BUCKET_COUNT=5 WITH (bucket_count=5)`. [#76068][#76068]
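-
- For example, a sketch with hypothetical table and column names:
-
- ~~~ sql
- CREATE INDEX ON events (user_id) USING HASH WITH (bucket_count = 8);
- ~~~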
-- The `bulkio.backup.merge_file_buffer_size` cluster setting default value has been changed from 16MiB to 128MiB. This value determines the maximum byte size of SSTs that we buffer before forcing a flush during a backup. [#75988][#75988]
-- CockroachDB now supports the `bucket_count` storage parameter syntax, which should be used instead of the old `WITH BUCKET_COUNT=xxx` syntax. With this change, CockroachDB outputs the new syntax in [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v22.1/show-create) statements. [#76112][#76112]
-- CockroachDB now saves statement plan hashes, or [gists](https://www.cockroachlabs.com/docs/v22.1/crdb-internal#detect-suboptimal-and-regressed-plans), to the persisted statement statistics, inside the statistics column. [#75762][#75762]
-- PostgreSQL error codes were added to the majority of [spatial functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#spatial-functions). [#76129][#76129]
-- Performing a `BACKUP` on ranges containing extremely large numbers of revisions to a single row no longer fails with errors related to exceeding the size limit. [#76254][#76254]
-- The default bucket count for hash-sharded indexes is 16. [#76115][#76115]
-- CockroachDB now filters out internal statements and transactions from UI timeseries metrics. [#75815][#75815]
-- [`EXPORT PARQUET`](https://www.cockroachlabs.com/docs/v22.1/export) now supports all data types that Avro changefeeds support. Below are the data type conversions from CockroachDB to Parquet. To maintain backward compatibility with older Parquet readers, Parquet converted types were also annotated. To learn more about Parquet data representation, [see the Parquet docs](https://github.com/apache/parquet-format).
-
- CockroachDB Type Family -> Parquet Type | Parquet Logical Type | Parquet Converted Type
- --|---|--
- Bool -> boolean | nil | nil
- String -> byte array | string | string
- Collated String -> byte array | string| string
- INet -> byte array | string | string
- JSON -> byte array | json | json
- Int (oid.T_int8) -> int64 | int64 | int64
- Int (oid.T_int4 or oid.T_int2) -> int32 | int32 | int32
- Float -> float64 | nil | nil
- Decimal -> byte array | decimal (Note: scale and precision data are preserved in the parquet file) | decimal
- Uuid -> fixed length byte array (16 bytes) | uuid | no converted type
- Bytes -> byte array | nil | nil
- Bit -> byte array | nil | nil
- Enum -> byte array | Enum | Enum
- Box2d -> byte array | string | string
- Geography -> byte array | nil | nil
- Geometry -> byte array | nil | nil
- Date -> byte array | string | string
- Time -> int64 | time (note: microseconds since midnight) | time
- TimeTz -> byte array | string | string
- Interval -> byte array | string (specifically represented as ISO8601) | string
- Timestamp -> byte array | string | string
- TimestampTz -> byte array | string | string
- Array -> encoded as a repeated field and each array value gets encoded by pattern described above. | List | List
-
- [#75890][#75890]
-
-- [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-create) no longer shows the `FAMILY` clause if there is only the `PRIMARY` family clause. [#76285][#76285]
-- CockroachDB now records the approximate time when an index was created. This information is exposed via a new `NULL`-able `TIMESTAMP` column, `created_at`, on [`crdb_internal.table_indexes`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal). [#75753][#75753]
-- Added support for query cancellation via the `pgwire` protocol. CockroachDB will now respond to a `pgwire` cancellation by forwarding the request to the node that is running a particular query. That node will then cancel the query that is currently running in the session identified by the cancel request. The cancel request is made through the `pgwire` protocol when initializing a new connection. The client must first send 32 bits containing the integer 80877102, followed immediately by the 64-bit `BackendKeyData` message that the server sent to the client when the session was started. Most PostgreSQL drivers handle this protocol already, so there's nothing for the end-user to do apart from calling the `cancel` function that their driver offers. See the [PostgreSQL docs](https://www.postgresql.org/docs/13/protocol-flow.html#id-1.10.5.7.9) for more information. [#67501][#67501]
-- Renamed the [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup), and [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) option `incremental_storage` to `incremental_location`. [#76416][#76416]
-- Restored data now appears to have been written at the time it was restored, rather than the time at which it was backed up, when reading the lower-level write timestamps from the rows themselves. This affects various internal operations and the result of `crdb_internal_mvcc_timestamp`. [#76271][#76271]
-- The built-in functions `crdb_internal.force_panic`, `crdb_internal.force_log_fatal`, `crdb_internal.set_vmodule`, `crdb_internal.get_vmodule` are now available to all `admin` users, not just `root`. [#76518][#76518]
-- `BACKUP` of a table marked with `exclude_data_from_backup` via `ALTER TABLE ... SET (exclude_data_from_backup = true)` will no longer back up that table's row data. The backup will continue to back up the table's descriptor and related metadata, so restoring it results in an empty version of the backed-up table. [#75451][#75451]
-- Failed [`DROP INDEX`](https://www.cockroachlabs.com/docs/v22.1/drop-index) schema changes are no longer rolled back. Rolling back a failed `DROP INDEX` requires the index to be rebuilt, a potentially long-running, expensive operation. Further, in previous versions, such rollbacks were already incomplete as they failed to roll back cascaded drops for dependent views and foreign key constraints. [#75727][#75727]
-- Fixed a bug where setting `sql.contention.txn_id_cache.max_size` to 0 would effectively turn off the transaction ID cache. [#76523][#76523]
-- CockroachDB now allows users to add `NEW_KMS` encryption keys to existing backups using `ALTER BACKUP <backup> ADD NEW_KMS = <uri> WITH OLD_KMS = <uri>` or `ALTER BACKUP <subdir> IN <collection> ADD NEW_KMS = <uri> WITH OLD_KMS = <uri>`. The `OLD_KMS` value must refer to at least one KMS URI that was previously used to encrypt the backup. Following successful completion of the `ALTER BACKUP`, subsequent backup, restore, and show commands can use any of the old or new KMS URIs to decrypt the backup. [#75900][#75900]
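-
- A sketch with placeholder backup location and KMS URIs (the exact URI formats depend on the KMS provider):
-
- ~~~ sql
- ALTER BACKUP 's3://my-bucket/my-backup'
-     ADD NEW_KMS = 'aws:///new-key-id?AUTH=implicit&REGION=us-east-1'
-     WITH OLD_KMS = 'aws:///old-key-id?AUTH=implicit&REGION=us-east-1';
- ~~~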
-- [Primary key](https://www.cockroachlabs.com/docs/v22.1/primary-key) columns which are not part of a unique secondary index (but are "implicitly" included because all indexes include all primary key columns) are now marked as `storing` in the `information_schema.statistics` table and in `SHOW INDEX`. This is technically more correct; the column is in the value in KV and not in the indexed key. [#72670][#72670]
-- A special flavor of `RESTORE`, `RESTORE SYSTEM USERS FROM ...`, was added to support restoring system users from a backup. When executed, the statement recreates those users that are in a backup of `system.users` but do not currently exist (ignoring those that do) and re-grants roles for users if the backup contains `system.role_members`. [#71542][#71542]
-- Added support for `DECLARE`, `FETCH`, and `CLOSE` commands for creating, using, and deleting [SQL cursors](https://www.cockroachlabs.com/docs/v22.1/cursors). [#74006][#74006]
-- [SQL cursors](https://www.cockroachlabs.com/docs/v22.1/cursors) now appear in `pg_catalog.pg_cursors`. [#74006][#74006]
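-
- A minimal sketch (cursors are used inside an explicit transaction; the table name is hypothetical):
-
- ~~~ sql
- BEGIN;
- DECLARE rides_cur CURSOR FOR SELECT * FROM rides;
- SELECT name, statement FROM pg_catalog.pg_cursors;
- FETCH FORWARD 2 FROM rides_cur;
- CLOSE rides_cur;
- COMMIT;
- ~~~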
-- CockroachDB now turns on support for hash-sharded indexes in implicitly partitioned tables. Previously, CockroachDB blocked users from creating hash-sharded indexes in all kinds of partitioned tables, including implicitly partitioned tables using `PARTITION ALL BY` or `REGIONAL BY ROW`. Primary keys cannot be hash-sharded if the table is explicitly partitioned with `PARTITION BY`, and an index cannot be hash-sharded if the index is explicitly partitioned with `PARTITION BY`. Partitioning columns cannot be placed explicitly as key columns of a hash-sharded index, including a regional-by-row table's `crdb_region` column. [#76358][#76358]
-- When a hash-sharded index is partitioned, ranges are now pre-split within every possible partition on shard boundaries. Each partition is pre-split into at most 16 ranges; if the index's bucket count is lower, it is split into that many ranges instead. Note that only list partitions are pre-split; CockroachDB doesn't pre-split range partitions. [#76358][#76358]
-- New user privileges were added: `VIEWCLUSTERSETTING` and `NOVIEWCLUSTERSETTING`, which control whether users can only view cluster settings (but not modify them). [#76012][#76012]
-- Several error cases in geospatial and other built-in functions now return more appropriate error codes. [#76458][#76458]
-- [Expression indexes](https://www.cockroachlabs.com/docs/v22.1/expression-indexes) can no longer have duplicate expressions. [#76863][#76863]
-- The `crdb_internal.serialize_session` and `crdb_internal.deserialize_session` functions now handle prepared statements. When deserializing, any prepared statements that existed when the session was serialized are re-prepared. Re-preparing a statement if the current session already has a statement with that name throws an error. [#76399][#76399]
-- The `experimental_enable_hash_sharded_indexes` session variable was removed, along with the corresponding cluster setting. The ability to create hash-sharded indexes is enabled automatically. SQL statements that refer to the setting will still work but will have no effect. [#76937][#76937]
-- Added the session variable `default_transaction_quality_of_service` which controls the priority of work submitted to the different [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) queues on behalf of SQL requests submitted in a session. Admission control must be enabled for this setting to have an effect. To increase admission control priority of subsequent SQL requests:
-
- ~~~ sql
- SET default_transaction_quality_of_service=critical;
- ~~~
-
- To decrease admission control priority of subsequent SQL requests:
-
- ~~~ sql
- SET default_transaction_quality_of_service=background;
- ~~~
-
- To reset admission control priority to the default session setting (in between background and critical):
-
- ~~~ sql
- SET default_transaction_quality_of_service=regular;
- ~~~
-
- [#76512][#76512]
-- CockroachDB now limits the bucket count in [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) to an inclusive range of [2, 2048]. Previously, the bucket count was only required to be a positive 32-bit integer greater than 1. [#77004][#77004]
-- Added support for distributed import queries in multi-tenant environments, which allows import queries to have improved parallelism by utilizing all available SQL pods in the tenant. [#76566][#76566]
-- The `ST_Box2DFromGeoHash` function now accepts `NULL` arguments. If the precision is `NULL`, it is equivalent to no precision being passed in. Upper-case characters are now parsed as lower-case characters for `geohash`, matching PostGIS behavior. [#76990][#76990]
-- CockroachDB now supports the `SHOW COMPLETIONS AT OFFSET <offset> FOR <statement>` syntax, which returns a set of SQL keywords that can complete the keyword at `<offset>` in the given `<statement>`. If the offset is in the middle of a word, then it returns the full word. For example, `SHOW COMPLETIONS AT OFFSET 1 FOR 'SELECT'` returns `select`. [#72925][#72925]
-- A new row level TTL was added to CockroachDB, which is available as a beta feature. This allows users to use a special syntax to automatically mark rows for deletion. Rows are deleted using a `SCHEDULED JOB`.
-
- A user can create a table with TTL using:
-
- ~~~ sql
- CREATE TABLE t (id INT PRIMARY KEY) WITH (ttl_expire_after = '10 mins')
- ~~~
-
- Where `ttl_expire_after` is a [duration expression](https://www.cockroachlabs.com/docs/v22.1/interval). A user can also add TTL to an existing table using:
-
- ~~~ sql
- ALTER TABLE t SET (ttl_expire_after = '10 mins')
- ~~~
-
- This creates a new column, `crdb_internal_expiration`, which is automatically set to `now() + ttl_expire_after` on insert (as a default) and on update. The scheduled job will delete any rows which exceed this timestamp as of the beginning of the job run. The TTL job is configurable in a few ways using the `WITH`/`SET` syntax:
-
- - `ttl_select_batch_size`: how many rows to select at once (default is cluster setting `sql.ttl.default_select_batch_size`)
- - `ttl_delete_batch_size`: how many rows to delete at once (default is cluster setting `sql.ttl.default_delete_batch_size`)
- - `ttl_delete_rate_limit`: maximum rows to delete per second for the given table (default is cluster setting `sql.ttl.default_delete_rate_limit`)
- - `ttl_pause`: pauses the TTL job (also globally pausable with `sql.ttl.job.enabled`).
-
- Using `ALTER TABLE table_name RESET (<ttl_storage_param>)` will reset that parameter to its default, or `RESET (ttl)` will disable the TTL job for the table and remove the `crdb_internal_expiration` column. [#76918][#76918]
-
-- Added the cluster setting `sql.contention.event_store.capacity`. This cluster setting can be used to control the in-memory capacity of the contention event store. When this setting is set to zero, the contention event store is disabled. [#76719][#76719]
-- When dropping a user that has default privileges, the error message now includes which database and schema in which the default privileges are defined. Additionally a hint is given to show exactly how to remove the default privileges. For example:
-
- ~~~
- pq: role testuser4 cannot be dropped because some objects depend on it
- owner of default privileges on new sequences belonging to role testuser4 in database testdb2 in schema s
- privileges for default privileges on new sequences belonging to role testuser3 in database testdb2 in schema s
- privileges for default privileges on new sequences for all roles in database testdb2 in schema public
- privileges for default privileges on new sequences for all roles in database testdb2 in schema s
- HINT: USE testdb2; ALTER DEFAULT PRIVILEGES FOR ROLE testuser4 IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser3;
- USE testdb2; ALTER DEFAULT PRIVILEGES FOR ROLE testuser3 IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser4;
- USE testdb2; ALTER DEFAULT PRIVILEGES FOR ALL ROLES IN SCHEMA PUBLIC REVOKE ALL ON SEQUENCES FROM testuser4;
- USE testdb2; ALTER DEFAULT PRIVILEGES FOR ALL ROLES IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser4;
- ~~~
-
- [#77016][#77016]
-- Added support for distributed backups in a multitenant environment that uses all available SQL pods in the tenant. [#77023][#77023]
-
-
Operational changes
-
-- Sending a CockroachDB process, including one running a client command, a `SIGUSR2` signal now causes it to open an HTTP port that serves the basic Go performance inspection endpoints for use with `pprof`. [#75678][#75678]
-- Operators who wish to access HTTP endpoints of the cluster through a proxy can now request specific `nodeID`s through a `remote_node_id` query parameter or cookie with the value set to the `nodeID` to which they would like to proxy the connection. [#72659][#72659]
-- Added the `admission.epoch_lifo.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings), disabled by default, which enables the use of epoch-LIFO adaptive queueing behavior in [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control). [#71882][#71882]
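-
- For example, a minimal sketch of enabling the behavior:
-
- ~~~ sql
- SET CLUSTER SETTING admission.epoch_lifo.enabled = true;
- ~~~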
-- Added the cluster setting `bulkio.backup.resolve_destination_in_job.enabled` which can be used to delay resolution of backup's destination until the job starts running. [#76670][#76670]
-- A `server.max_connections` cluster setting was added to limit the maximum number of connections to a server. It is disabled by default. [#76401][#76401]
-- `BACKUP` now resolves incremental backup destinations during the job's execution phase rather than while it is being created to reduce contention on the `system.jobs` table. The `bulkio.backup.resolve_destination_in_job.enabled` cluster setting that enabled this functionality in some v21.2 patch releases was removed. [#76853][#76853]
-- Added the cluster setting `kv.raft_log.loosely_coupled_truncation.enabled` which can be used to disable loosely coupled truncation. [#76215][#76215]
-- `RESTORE` now runs at a higher parallelism by default to improve performance. [#76907][#76907]
-- Added the `admission.epoch_lifo.epoch_duration`, `admission.epoch_lifo.epoch_closing_delta_duration`, `admission.epoch_lifo.queue_delay_threshold_to_switch_to_lifo` cluster settings for configuring epoch-LIFO queueing in admission control. [#76951][#76951]
-
-
Command-line changes
-
-- Fixed the [CLI help](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) text for `ALTER DATABASE` to show correct options for `ADD REGION` and `DROP REGION`, and include some missing options such as `CONFIGURE ZONE`. [#74929][#74929]
-- If range lease transfers during a graceful drain encounter issues, verbose logging is now automatically enabled to help with troubleshooting. [#68488][#68488]
-- All [`cockroach` commands](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands) now log their stack but do not exit when sent a `SIGQUIT` signal. This behavior is consistent with the behavior of `cockroach start`. [#75678][#75678]
-- The [`debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) utility now also scrapes the cluster-wide KV replication reports in the output. [#75239][#75239]
-- The flag `--self` of the [`cockroach node decommission` command](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) is deprecated. Instead, operators should specify the node ID of the target node as an explicit argument. The node that the command is connected to should not be a target node. [#74319][#74319]
-- Added a new optional `version` argument to the `doctor examine` command. This can be used to enable or disable validation when examining older ZIP directories. [#76166][#76166]
-- The `debug zip` CLI command now supports exporting `system` and `crdb_internal` tables to a ZIP folder for tenants. [#75572][#75572]
-- Added instructions to an error message when initializing `debug tsdump`. [#75880][#75880]
-- `cockroach sql` (and [`demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo)) now continue to accept user input when Ctrl+C is pressed at the interactive prompt and the current input line is empty. Previously, it would terminate the shell. To terminate the shell, the client-side command `\q` is still supported. The user can also terminate the input altogether via `EOF` (Ctrl+D). The behavior for non-interactive use remains unchanged. [#76427][#76427]
-- The interactive SQL shell (`cockroach sql`, `cockroach demo`) now supports interrupting a currently running query with Ctrl+C, without losing access to the shell. [#76437][#76437]
-- Added a new CLI flag `--max-tsdb-memory` used to set the memory budget for timeseries queries when processing requests from the [**Metrics** page in the DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard). Most users should not need to change this setting as the default of 1% of system memory or 64 MiB, whichever is greater, is adequate for most deployments. In cases where a deployment of hundreds of nodes has low per-node memory available (for example, below 8 GiB) it may be necessary to increase this value to `2%` or higher in order to render time series graphs for the cluster using the DB Console. Otherwise, use the default settings. [#74662][#74662]
-- Node drains now ensure that SQL statistics are not lost during the process but are instead preserved in the statement statistics system table. [#76397][#76397]
-- The CLI now auto completes on tab by using `SHOW COMPLETIONS AT OFFSET`. [#72925][#72925]
-
-
API endpoint changes
-
-- The `/_status/load` endpoint, which delivers an instant measurement of CPU load, is now available for regular CockroachDB nodes and not just multitenant SQL-only servers. [#75852][#75852]
-- The `StatusClient` interface has been extended with a new request called `NodesListRequest`. This request returns a list of KV nodes for KV servers and SQL nodes for SQL only servers with their corresponding SQL and RPC addresses. [#75572][#75572]
-- Users with the `VIEWACTIVITYREDACTED` [role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization) will not have access to the full queries with constants in the `ListSessions` response. [#76675][#76675]
-
-
DB Console changes
-
-- Removed `$ internal` as one of the apps options under the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) page filters. [#75470][#75470]
-- Removed formatting of statements on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page), [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page), and **Index** details pages. [#75443][#75443]
-- Changed the order of tabs under the **SQL Activity** page to be **Statements**, **Transactions**, and [**Sessions**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page). [#75490][#75490]
-- The logical plan text is now included in searchable text in the **Statements** page. [#75097][#75097]
-- If the user has the role `VIEWACTIVITYREDACTED`, we now hide the Statement Diagnostics bundle info on **Statements** page (diagnostics column), **Statement Details** page (diagnostics tab) and [**Advanced Debug**](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) page (diagnostics history). [#75274][#75274]
-- Loading and error pages are now below page config on the **Transactions** and **Statements** pages. This was introduced in CockroachDB v21.2.5. [#75458][#75458]
-- Added `Circuit Breaker` graphs on the **Replication Dashboard** in the DB Console. This was introduced in CockroachDB v21.2.5. [#75613][#75613]
-- Added an option to cancel a running request for statement diagnostics. [#75733][#75733]
-- DB Console requests can now be routed to arbitrary nodes in the cluster. Users can select a node from a dropdown in the **Advanced Debug** page of the DB Console to route their UI to that node. Manually initiated requests can either add a `remote_node_id` query parameter to their request or set a `remote_node_id` HTTP cookie in order to manage the routing of their request. [#72659][#72659]
-- We no longer show information about aggregation timestamps on the **Statements** and **Statement Details** pages, since now all the statement fingerprints are grouped inside the same time selection. [#76301][#76301]
-- Added the status of automatic statistics collection to the [**Database**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) and Database table pages in the DB Console.
-- Added the timestamp of the last statistics collection to the **Database** details and **Database** table pages in the DB Console. [#76168][#76168]
-- Open SQL Transactions and Active SQL Transactions are now downsampled using `MAX` instead of `AVG` and will more accurately reflect narrow spikes in transaction counts when looking at downsampled data. [#76348][#76348]
-- Circuit breakers are now displayed in the problem ranges and range status reports. [#75809][#75809]
-- A **Now** button was added to the **Statements** and **Transactions** pages. The **Reset time** link was replaced by the **Now** button. [#76691][#76691]
-- Changed `invalid lease` to `expired lease` on the Problem Ranges section of the **Advanced Debug** page. [#76757][#76757]
-- Added column selector, filters, and new columns to the **Sessions** and **Sessions Details** pages. [#75965][#75965]
-- Added long loading messages to the [**SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-sql-dashboard) pages. [#76739][#76739]
-
-
Bug fixes
-
-- Fixed possible panics in some distributed queries using `ENUM`s in [join predicates](https://www.cockroachlabs.com/docs/v22.1/joins). [#74659][#74659]
-- Fixed a bug that could previously cause redundant lease transfers. [#74726][#74726]
-- Fixed a bug where deleting data in schema changes (for example, when dropping an index or table) could fail with a `command too large` error. [#74674][#74674]
-- Fixed a bug where CockroachDB could encounter an internal error when performing [`UPSERT`](https://www.cockroachlabs.com/docs/v22.1/upsert) or [`INSERT ... ON CONFLICT`](https://www.cockroachlabs.com/docs/v22.1/insert#on-conflict-clause) queries in some cases when the new rows contained `NULL` values (either `NULL`s explicitly specified or `NULL`s used since some columns were omitted). [#74825][#74825]
-- Fixed a bug where the scale of a [`DECIMAL`](https://www.cockroachlabs.com/docs/v22.1/decimal) column was not enforced when values specified in scientific notation (for example, `6e3`) were inserted into the column. [#74869][#74869]
-- Fixed a bug where certain malformed [backup schedule expressions](https://www.cockroachlabs.com/docs/v22.1/manage-a-backup-schedule) caused the node to crash. [#74881][#74881]
-- Fixed a bug where a [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) job could hang if it encountered an error when ingesting restored data. [#74905][#74905]
-- Fixed a bug which caused errors in rare cases when trying to divide `INTERVAL` values by `INT4` or `INT2` values. [#74882][#74882]
-- Fixed a bug that could occur when a [`TIMETZ`](https://www.cockroachlabs.com/docs/v22.1/time) column was indexed, and a query predicate constrained that column using a `<` or `>` operator with a `TIMETZ` constant. If the column contained values with time zones that did not match the time zone of the `TIMETZ` constant, it was possible that not all matching values could be returned by the query. Specifically, the results may not have included values within one microsecond of the predicate's absolute time. This bug was introduced when the `TIMETZ` datatype was first added in v20.1. It exists in all versions of v20.1, v20.2, v21.1, and v21.2 prior to this patch. [#74914][#74914]
-- Fixed an internal error, `estimated row count must be non-zero`, that could occur during planning for queries over a table with a `TIMETZ` column. This error was due to a faulty assumption in the statistics estimation code about ordering of `TIMETZ` values, which has now been fixed. The error could occur when `TIMETZ` values used in the query had a different time zone offset than the `TIMETZ` values stored in the table. [#74914][#74914]
-- The `--user` argument is no longer ignored when using [`cockroach sql`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) in `--insecure` mode. [#75194][#75194]
-- Fixed a bug where CockroachDB could incorrectly report the `KV bytes read` statistic in [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) output. The bug is present only in v21.2 versions. [#75175][#75175]
-- Fixed a bug that caused internal errors in queries with set operations, like [`UNION`](https://www.cockroachlabs.com/docs/v22.1/selection-queries#union-combine-two-queries), when corresponding columns on either side of the set operation were not the same. This error only occurred with a limited set of types. This bug is present in v20.2.6+, v21.1.0+, and v21.2.0+. [#75219][#75219]
-- Fixed a bug where [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v22.1/create-index) statements using expressions failed in some cases if they encountered an internal retry. [#75056][#75056]
-- Fixed a bug where, when creating [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) on existing tables, traffic initially hit the single range of the new index until it split into more ranges for shards as the range size grew. Schema changes can now pre-split ranges on shard boundaries before the index becomes writable. Added the `sql.hash_sharded_range_pre_split.max` cluster setting, which lets users set an upper bound on the number of pre-split ranges. If the bucket count of the defined index is less than the cluster setting, the bucket count determines the number of pre-split ranges. [#74923][#74923]
-- Updated the `String()` function of `roleOption` to add a space on the role `VALID UNTIL`. [#75271][#75271]
-- Fixed a bug where **SQL Activity** pages crashed when a column was sorted for a third time. [#75473][#75473]
-- Fixed a bug where if multiple columns were added to a table inside a transaction, then none of the columns would be backfilled if the last column did not require a backfill. [#75076][#75076]
-- Fixed a bug where, in some cases, queries that involved a scan returning many results and that included lookups for individual keys did not return all results from the table. [#75475][#75475]
-- Fixed a bug where dropping and creating a [primary index](https://www.cockroachlabs.com/docs/v22.1/primary-key) constraint with the same name in a transaction would incorrectly fail. [#75155][#75155]
-- `crdb_internal.deserialize_session` now checks if the `session_user` has the privilege to `SET ROLE` to the `current_user` before changing the session settings. [#75575][#75575]
-- [Dedicated clusters](https://www.cockroachlabs.com/docs/cockroachcloud/create-your-cluster) can now restore tables and databases from backups made by tenants. [#73647][#73647]
-- Fixed a bug that caused high SQL tail latencies during background [rebalancing](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer) in the cluster. [#73697][#73697]
-- Fixed a bug where, when a table or column that owned a [sequence](https://www.cockroachlabs.com/docs/v22.1/create-sequence) was dropped, the sequence remained even though its owner no longer existed. A sequence is created when a column is defined as a `SERIAL` type and the `serial_normalization` session variable is set to `sql_sequence`. In this case, the sequence is owned by the column and the table where the column exists. The sequence should be dropped when the owner table or column is dropped, which is the PostgreSQL behavior. CockroachDB now assigns correct ownership information to the sequence descriptor and column descriptor so that CockroachDB aligns with PostgreSQL. [#74840][#74840]
-- Fixed a bug where the `options` query parameter was removed when using the `\c` command in the [SQL shell](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) to reconnect to the cluster. [#75673][#75673]
-- [`cockroach node decommission`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) no longer causes query failure due to the decommissioning node not closing open SQL connections and still being marked as ready. The decommissioning process now includes a draining step that fixes this. In other words, a decommission now automatically drains a node. This also means that running a drain after a decommission is no longer necessary. It is optional, but recommended, that `cockroach node drain` is used before `cockroach node decommission` to avoid the possibility of a disturbance in query performance. [#74319][#74319]
-- The `CancelSession` endpoint now correctly propagates gateway metadata when forwarding requests. [#75814][#75814]
-- Fixed a bug which could cause nodes to crash when truncating abnormally large Raft logs. [#75793][#75793]
-- Fixed a bug that caused incorrect values to be written to [computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) when their expressions were of the form `j->x = y`, where `j` is a [`JSON`](https://www.cockroachlabs.com/docs/v22.1/jsonb) column and `x` and `y` are constants. This bug also caused corruption of [partial indexes](https://www.cockroachlabs.com/docs/v22.1/partial-indexes) with `WHERE` clauses containing expressions of the same form. This bug was present since version v2.0. [#75914][#75914]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) retry instead of fail on RPC send failure. [#75517][#75517]
-- Fixed a rare race condition that could lead to client-visible errors like `found ABORTED record for implicitly committed transaction`. These errors were harmless in that they did not indicate data corruption, but they could be disruptive to clients. [#75601][#75601]
-- Fixed a bug where swapping primary keys could lead to scenarios where [foreign key references](https://www.cockroachlabs.com/docs/v22.1/foreign-key) could lose their uniqueness. [#75820][#75820]
-- Fixed a bug where [`CASE` expressions](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#conditional-expressions) with branches that result in types that cannot be cast to a common type caused internal errors. They now result in a user-facing error. [#76193][#76193]
-- Fixed a bug that caused internal errors when querying tables with [virtual columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) in the primary key. This bug was only present since version v22.1.0-alpha.1 and does not appear in any production releases. [#75898][#75898]
-- The DB console [**Databases**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) page now shows stable, consistent values for database sizes. [#76315][#76315]
-- Fixed a bug where comments were not cleaned up when the table primary keys were swapped, which could cause [`SHOW TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-tables) to fail. [#76277][#76277]
-- Fixed a bug where some of the [`cockroach node`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) subcommands did not handle `--timeout` properly. [#76427][#76427]
-- Fixed a bug which caused the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) to omit join filters in rare cases when reordering joins, which could result in incorrect query results. This bug was present since v20.2. [#76334][#76334]
-- Fixed a bug where the list of recently decommissioned nodes and the historical list of decommissioned nodes incorrectly displayed decommissioned nodes. [#76538][#76538]
-- Fixed a bug where CockroachDB could incorrectly fail to return a row from a table with multiple column families when that row contained a `NULL` value and a composite type ([`FLOAT`](https://www.cockroachlabs.com/docs/v22.1/float), [`DECIMAL`](https://www.cockroachlabs.com/docs/v22.1/decimal), [`COLLATED STRING`](https://www.cockroachlabs.com/docs/v22.1/collate), or an array of these types) was included in the `PRIMARY KEY`. [#76563][#76563]
-- There is now a 1 hour timeout when sending [Raft](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#raft) snapshots, to avoid stalled snapshot transfers preventing Raft log truncation and growing the Raft log very large. This is configurable via the `COCKROACH_RAFT_SEND_SNAPSHOT_TIMEOUT` environment variable. [#76589][#76589]
-- Fixed an error that could sometimes occur when sorting the output of the [`SHOW CREATE ALL TABLES`](https://www.cockroachlabs.com/docs/v22.1/show-create) statement. [#76639][#76639]
-- Fixed a bug where [backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) incorrectly backed up database, schema, and type descriptors that were in a `DROP` state at the time the backup was run. This bug prevented the user from backing up and restoring if their cluster had dropped and public descriptors with colliding names. [#76635][#76635]
-- Fixed a race condition that in rare circumstances could cause a node to panic with `unexpected Stopped processor` during shutdown. [#76825][#76825]
-- Fixed a bug where the different stages of preparing, binding, and executing a prepared statement would use different implicit transactions. Now these stages all share the same implicit transaction. [#76792][#76792]
-- Concurrent profiles can now be run, up to a concurrency limit of two. This removes `profile id not found` errors when running up to two profiles concurrently. When a profile is not found, the error message now suggests remediation steps to unblock the user. [#76266][#76266]
-- The content type header for the HTTP log sink is now set to `application/json` if the format of the log output is `JSON`. [#77014][#77014]
-- Fixed a bug that could corrupt indexes containing [virtual columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) or [expressions](https://www.cockroachlabs.com/docs/v22.1/expression-indexes). The bug only occurred when the index's table had a foreign key reference to another table with an `ON DELETE CASCADE` action, and a row was deleted in the referenced table. This bug was present since virtual columns were added in version v21.1.0. [#77052][#77052]
-- Fixed a bug where CockroachDB could crash when running a `SQL PREPARE` using the PostgreSQL extended protocol. [#77063][#77063]
-- Fixed a bug where running SQL-level `EXECUTE` using the PostgreSQL extended protocol had inconsistent behavior and could in some cases crash the server. [#77063][#77063]
-- The `crdb_internal.node_inflight_trace_spans` virtual table will now present traces for all operations ongoing on the respective node. Previously, the table would reflect a small percentage of ongoing operations unless tracing was explicitly enabled. [#76403][#76403]
-- The default value of `kv.rangefeed_concurrent_catchup_iterators` was lowered to 16 to help avoid overload during `CHANGEFEED` restarts. [#75851][#75851]
-
-
Performance improvements
-
-- The memory representation of [`DECIMAL`](https://www.cockroachlabs.com/docs/v22.1/decimal) datums has been optimized to save space, avoid heap allocations, and eliminate indirection. This increases the speed of `DECIMAL` arithmetic and aggregation by up to 20% on large data sets. [#74590][#74590]
-- `RESTORE` operations in [Serverless clusters](https://www.cockroachlabs.com/docs/cockroachcloud/create-a-serverless-cluster) now explicitly ask the host cluster to distribute data more evenly. [#75105][#75105]
-- `IMPORT`, `CREATE INDEX`, and other [bulk ingestion jobs](https://www.cockroachlabs.com/docs/cockroachcloud/take-and-restore-self-managed-backups) run on Serverless clusters now collaborate with the host cluster to spread ingested data more evenly during ingest. [#75105][#75105]
-- The `covar_pop` [aggregate function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#aggregate-functions) is now evaluated more efficiently in a distributed setting. [#73062][#73062]
-- Queries using [`NOT expr`](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions) syntax can now be evaluated faster in some cases. [#75058][#75058]
-- The `regr_sxx`, `regr_sxy`, `regr_syy` aggregate functions are now evaluated more efficiently in a distributed setting. [#75619][#75619]
-- Transaction read refresh operations performed during optimistic concurrency control's validation phase now use a time-bound file filter when scanning the LSM tree. This allows these operations to avoid scanning files that contain no keys written since the transaction originally performed its reads. [#74628][#74628]
-- Fixed a set of bugs that rendered Queries-Per-Second (QPS) based lease and replica rebalancing in v21.2 and earlier ineffective under heterogeneously loaded cluster localities. Additionally, fixed a limitation that prevented CockroachDB from effectively alleviating extreme QPS hotspots on nodes. [#72296][#72296]
-- The [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) better optimizes queries that include both foreign key joins and self-joins. [#75582][#75582]
-- A `LIMIT` can now be pushed below a foreign key join or self-join in more cases, which may result in more efficient query plans. [#75582][#75582]
-- The performance of many `DECIMAL` arithmetic operators has been improved by as much as 60%. These operators include division (`/`), `sqrt`, `cbrt`, `exp`, `ln`, `log`, and `pow`. [#75770][#75770]
-- Stores will retry requests that are directed at the incorrect range, most commonly following a recent range split. This patch has the effect of reducing tail latency following range splits. [#75446][#75446]
-- The optimizer can now generate lookup joins in certain cases for non-covering indexes, when performing a left outer/semi/anti join. [#58261][#58261]
-- The optimizer now plans inner lookup joins using expression indexes in more cases, resulting in more efficient query plans. [#76078][#76078]
-- Certain forms of automatically retried `read uncertainty` errors are now retried more efficiently, avoiding a network round trip. [#75905][#75905]
-- The `regr_avgx`, `regr_avgy`, `regr_intercept`, `regr_r2`, and `regr_slope` aggregate functions are now evaluated more efficiently in a distributed setting. [#76007][#76007]
-- `IMPORT`s and index backfills should now do a better job of spreading their load out over the nodes in the cluster. [#75894][#75894]
-- Fixed a bug in the histogram estimation code that could cause the optimizer to think a scan of a multi-column index would produce 0 rows, when in fact it would produce many rows. This could cause the optimizer to choose a suboptimal plan. It is now less likely for the optimizer to choose a suboptimal plan when multiple multi-column indexes are available. [#76486][#76486]
-- Added the `kv.replica_stats.addsst_request_size_factor` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). This setting is used to tune Queries-Per-Second (QPS) sensitivity to large imports. By default, this setting is disabled. When enabled, the size of any `AddSSTableRequest` will contribute to QPS in inverse relation to this setting's magnitude. By default, this setting is configured to a conservative 50,000; every 50 kilobytes will be accounted for as an additional 1 QPS. [#76252][#76252]
-- Queries with a [`LIMIT` clause](https://www.cockroachlabs.com/docs/v22.1/limit-offset) applied against a single table, whether written explicitly or implied (such as in an uncorrelated `EXISTS` subquery), now scan that table with improved latency if the table is defined with `LOCALITY REGIONAL BY ROW` and the number of qualified rows residing in the local region is less than or equal to the hard limit (the sum of the `LIMIT` clause and optional `OFFSET` clause values). This optimization is only applied if the hard limit is 100,000 or less. [#75431][#75431]
-- Fixed a limitation where upon adding a new node to the cluster, lease counts among existing nodes could diverge until the new node was fully up-replicated. [#74077][#74077]
-- The optimizer now attempts to plan lookup joins on indexes that include computed columns in more cases, which may improve query plans. [#76817][#76817]
-- The optimizer produces more efficient query plans for `INSERT .. ON CONFLICT` statements that do not have explicit conflict columns or constraints and are performed on partitioned tables. [#76961][#76961]
-- The `corr`, `covar_samp`, `sqrdiff`, and `regr_count` aggregate functions are now evaluated more efficiently in a distributed setting. [#76754][#76754]
-- The jobs scheduler now runs on a single node by default in order to reduce contention on the scheduled jobs table. [#73319][#73319]
-
-
Build changes
-
-- Upgraded to Go 1.17.6. [#74655][#74655]
-
-
-
-
Contributors
-
-This release includes 866 merged PRs by 89 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Max Neverov
-- RajivTS (first-time contributor)
-- Ulf Adams
-- e-mbrown
-- llllash (first-time contributor)
-- shralex
-
-
-
-- Altering the sink type of a [changefeed](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) is now disallowed. An attempt to change the sink type now returns an error message recommending that you create a new changefeed instead. [#77152][#77152]
-- Currently executing [schedules](https://www.cockroachlabs.com/docs/v22.1/manage-a-backup-schedule) are cancelled immediately when the jobs scheduler is disabled. [#77306][#77306]
-- The `changefeed.backfill_pending_ranges` [Prometheus metric](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting#prometheus-endpoint) was added to track ongoing backfill progress of a changefeed. [#76995][#76995]
-- Changefeeds can now be created on tables with more than one [column family](https://www.cockroachlabs.com/docs/v22.1/column-families). Previously, this would error. Now, CockroachDB creates a feed that will emit individual messages per column family. Primary key columns will appear in the key for all column families, but in the value only in the families they are in. For example, if a table `foo` has families `primary` containing the primary key and a string column, and `secondary` containing a different string column, you'll see two messages for an insert that will look like `0 -> {id: 0, s1: "val1"}, 0 -> {s2: "val2"}`. If an update then only affects one family, you'll see only one message (e.g., `0 -> {s2: "newval"}`). This behavior reflects CockroachDB's internal treatment of column families: writes are processed and stored separately, with only the ordering and atomicity guarantees that would apply to updates to two different tables within a single transaction. Avro schema names will include the family name concatenated to the table name. If you don't specify family names in the `CREATE` or `ALTER TABLE` statement, the default family names will either be `primary` or of the form `fam__`. [#77084][#77084]
-
-
SQL language changes
-
-- Introduced the `crdb_internal.transaction_contention_events` virtual table, which exposes historical transaction contention events. The events exposed in the new virtual table also include transaction fingerprint IDs for both blocking and waiting transactions. This allows the new virtual table to be joined with the statement statistics and transaction statistics tables. The new virtual table requires either the `VIEWACTIVITYREDACTED` or `VIEWACTIVITY` [role option](https://www.cockroachlabs.com/docs/v22.1/alter-role#role-options) to access. However, if the user has the `VIEWACTIVITYREDACTED` role option, the contending key will be redacted. The contention events are stored in memory. The number of contention events stored is controlled via the `sql.contention.event_store.capacity` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). [#76917][#76917]
-- Added an initial implementation of a scheduled logger that captures index usage statistics and writes them to the [telemetry logging channel](https://www.cockroachlabs.com/docs/v22.1/logging#telemetry). [#76886][#76886]
-- Added the ability for the TTL job to generate statistics on the number of rows and the number of expired rows in the table. This is off by default and is controlled by the `ttl_row_stats_poll_interval` [storage parameter](https://www.cockroachlabs.com/docs/v22.1/sql-grammar#opt_with_storage_parameter_list). [#76837][#76837]
-- CockroachDB now returns an ambiguous [unary operator error](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#operators) for ambiguous input like `~'1'`, which can be interpreted as an integer (resulting in `-2`) or a bit string (resulting in `0`). [#76943][#76943]
-- [`crdb_internal.default_privileges`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) no longer incorrectly shows default privileges for databases where the default privilege was not actually defined. [#77255][#77255]
-- You can now create core [changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) on tables with more than one [column family](https://www.cockroachlabs.com/docs/v22.1/column-families). CockroachDB creates a feed that will emit individual messages per column family. Primary key columns will appear in the key for all column families, but in the value only in the families they are in. For example, if a table `foo` has families `primary` containing the primary key and a string column, and `secondary` containing a different string column, you'll see two messages for an insert that will look like `0 -> {id: 0, s1: "val1"}, 0 -> {s2: "val2"}`. If an update then only affects one family, you'll see only one message (e.g., `0 -> {s2: "newval"}`). This behavior reflects CockroachDB's internal treatment of column families: writes are processed and stored separately, with only the ordering and atomicity guarantees that would apply to updates to two different tables within a single transaction. [#77084][#77084]
-- A new [built-in scalar function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `crdb_internal.active_version()` can now be used alongside `crdb_internal.is_at_least_version()` to determine which cluster version is currently active and choose client-side feature levels accordingly. [#77233][#77233]
-- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v22.1/import-into) with Avro now supports Avro files with the following Avro types: `long.time-micros`, `int.time-millis`, `long.timestamp-micros`, `long.timestamp-millis`, and `int.date`. This feature works only if the user has created a CockroachDB table whose column types match the corresponding Avro types: `long.time-micros` and `int.time-millis` map to `TIME`, `long.timestamp-micros` and `long.timestamp-millis` map to `TIMESTAMP`, and `int.date` maps to `DATE`. [#76989][#76989]
-
-
DB Console changes
-
-- [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview) now displays locality information in [problem ranges](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#reports) and [range status](https://www.cockroachlabs.com/docs/v22.1/ui-replication-dashboard#ranges). [#76892][#76892]
-- DB Console now displays `is_leaseholder` and `lease_valid` information in problem ranges and range status pages. [#76892][#76892]
-- Added the Hot Ranges page and linked to it on the sidebar. [#77330][#77330]
-- Removed stray parenthesis at the end of the duration time for a successful job. [#77438][#77438]
-
-
Bug fixes
-
-- Previously, a bug caused the Open Transaction chart in the [Metrics Page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#metrics) to constantly increase for empty transactions. This issue has now been fixed. [#77237][#77237]
-- Previously, [draining nodes](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#draining) in a cluster without shutting them down could stall foreground traffic in the cluster. This patch fixes this bug. [#77246][#77246]
-
-
Performance improvements
-
-- Queries of the form `SELECT * FROM t1 WHERE filter_expression ORDER BY secondIndexColumn LIMIT n;`, where there is a `NOT NULL CHECK` constraint of the form `CHECK (firstIndexColumn IN (const_1, const_2, const_3...))`, can now be rewritten as a `UNION ALL` skip scan to avoid the previously-required sort operation. [#76893][#76893]
-
-
-
-- Clusters can be configured to send HSTS headers with HTTP requests in order to enable browser-level enforcement of HTTPS for the cluster host. This is controlled by setting the `server.hsts.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true` (default: `false`). Once the headers are present, after an initial request, browsers will force HTTPS on all subsequent connections to the host. This reduces the possibility of man-in-the-middle (MitM) attacks, which HTTP-to-HTTPS redirects are vulnerable to. [#77244][#77244]
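- A minimal sketch of enabling this from a SQL session (assuming a user with permission to change cluster settings):
- ~~~ sql
- -- Instruct nodes to attach HSTS headers to HTTP responses.
- SET CLUSTER SETTING server.hsts.enabled = true;
- ~~~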
-
-
Enterprise edition changes
-
-- Added a `created` time column to the `crdb_internal.active_range_feeds` virtual table to improve observability and debuggability of the rangefeed system. [#77597][#77597]
-- [Incremental backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups#incremental-backups) created by [`BACKUP ... INTO`](https://www.cockroachlabs.com/docs/v22.1/backup) or [`BACKUP ... TO`](https://www.cockroachlabs.com/docs/v22.1/backup) are now stored by default under the path `/incrementals` within the backup destination, rather than under each backup's path. This enables easier management of cloud-storage provider policies specifically applied to incremental backups. [#75970][#75970]
-
-
SQL language changes
-
-- Added a `sql.auth.resolve_membership_single_scan.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings), which changes the query for an internal role membership cache. Previously the code would recursively look up each role in the membership hierarchy, leading to multiple queries. With the setting on, it uses a single query. The setting is `true` by default. [#77359][#77359]
-- The [data type](https://www.cockroachlabs.com/docs/v22.1/data-types) of shard columns created for [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) has changed from `INT4` to `INT8`. This should have no effect on behavior or performance. [#76930][#76930]
-- Introduced the `sql.contention.resolver.queue_size` metric. This gauge metric gives the current length of the queue of contention events, each awaiting translation of its transaction ID into a transaction fingerprint ID. This metric can be used to assess the backlog of unresolved contention events. [#77514][#77514]
-- Introduced the `sql.contention.resolver.retries` metric. This counter metric reflects the number of retries performed by the contention event store when attempting to translate the transaction ID of a contention event into a transaction fingerprint ID. Any spike in this metric could indicate a possible anomaly in the transaction ID resolution protocol. [#77514][#77514]
-- Introduced the `sql.contention.resolver.failed_resolution` metric. This counter metric gives the total number of failed attempts to translate the transaction ID in contention events into a transaction fingerprint ID. A spike in this metric likely indicates a severe failure in the transaction ID resolution protocol. [#77514][#77514]
-- Added support for `date_trunc(string, interval)` for compatibility with PostgreSQL. This built-in function is required to support [Django 4.1](https://docs.djangoproject.com/en/dev/releases/4.1/). [#77508][#77508]
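- A minimal sketch of the new signature (the interval literal is only illustrative; fields smaller than the named precision are expected to be zeroed):
- ~~~ sql
- -- Truncate an interval to hour precision.
- SELECT date_trunc('hour', INTERVAL '2 days 03:45:12');
- ~~~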
-- Introduced a `sql.contention.event_store.duration_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). This cluster setting specifies the minimum contention duration to cause the contention events to be collected into the `crdb_internal.transaction_contention_events` virtual table (default: `0`). [#77623][#77623]
-- Added support for super region functionality. Super regions allow the user to define a set of regions on the database such that any `REGIONAL BY TABLE` table based in the super region, or any `REGIONAL BY ROW` partition in the super region, will have all of its replicas in regions within the super region. The primary use is for [data domiciling](https://www.cockroachlabs.com/docs/v22.1/data-domiciling). Super regions are an experimental feature and are gated behind the `enable_super_regions` session variable. The [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.defaults.super_regions.enabled` is used to enable super regions (default: `false`). [#76620][#76620]
-
-
Operational changes
-
-- Added the `server.shutdown.connection_wait` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to the [draining process](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#node-shutdown-sequence) configuration. This adds a draining phase during which the server waits for SQL connections to be closed; if all SQL connections are closed before the timeout, the server proceeds to the next draining phase early. This provides a workaround for intermittent blips and failed requests when performing operations that involve restarting nodes. [#72991][#72991]
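- A minimal sketch of configuring the new phase (the `10s` value is only an example):
- ~~~ sql
- -- Wait up to 10 seconds for SQL clients to close their connections during drain.
- SET CLUSTER SETTING server.shutdown.connection_wait = '10s';
- ~~~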
-- The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `admission.kv.tenant_weights.enabled` and `admission.kv.stores.tenant_weights.enabled` can now be used to enable tenant weights in multi-tenant storage servers (default: `false` for both). Tenant weights are based on the number of ranges for each tenant, and allow for weighted fair sharing. [#77575][#77575]
-
-
Command-line changes
-
-- The `cockroach debug tsdump` command can now be re-run with the import filename set to `-`, which allows viewing time-series data even in the case of node failures. [#77247][#77247]
-
-
DB Console changes
-
-- Added an alert banner on the [**Cluster Overview** page](https://www.cockroachlabs.com/docs/v22.1/ui-cluster-overview-page) that indicates when more than one node version is detected on the cluster. The alert lists the node versions detected and how many nodes are on each version. This provides more visibility into the progress of a cluster upgrade. [#76932][#76932]
-- The **Compactions/Flushes** graph on the [Storage dashboard](https://www.cockroachlabs.com/docs/v22.1/ui-storage-dashboard) now shows bytes written by these operations, and has been split into separate per-node graphs. [#77558][#77558]
-- The [**Explain Plan**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#explain-plans) tab of the [**Statement Details** page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) now shows statistics for all the plans executed by the selected statement on the selected period. [#77632][#77632]
-- Active operations can now be inspected in a new **Active operations** page linked from the [**Advanced Debug** page](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages). This facilitates viewing active traces and taking snapshots. [#77712][#77712]
-
-
Bug fixes
-
-- Fixed a bug where clicking the "Reset SQL stats" button on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages caused, in DB Console, an infinite loading spinner and, in CockroachDB {{ site.data.products.cloud }} Console, the **Statements**/**Transactions** table to be reloaded without limiting to the time range that the user had selected. The button now correctly reloads the table according to the selected time in both DB Console and CockroachDB {{ site.data.products.cloud }} Console. [#77571][#77571]
-- Previously, the `information_schema` tables `administrable_role_authorizations` and `applicable_roles` were incorrectly always returning the current user for the grantee column. Now, the column will contain the correct role that was granted the parent role given in the `role_name` column. [#77359][#77359]
-- Fixed a bug that caused errors when attempting to create table statistics (with [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v22.1/create-statistics) or [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze)) for a table containing an index which indexed only [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns). This bug had been present since version v21.1.0. [#77507][#77507]
-- All automatic jobs are now hidden from the [Jobs page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) of the DB Console. [#77331][#77331]
-- Added a limit of 7 concurrent asynchronous consistency checks per store, with an upper timeout of 1 hour. This prevents abandoned consistency checks from building up in some circumstances, which could lead to increasing disk usage as they held onto Pebble snapshots. [#77433][#77433]
-- Fixed a bug causing incorrect counts of `under_replicated_ranges` and `over_replicated_ranges` in the `crdb_internal.replication_stats` table for multi-region databases. [#76430][#76430]
-- Previously, intermittent validation failures could be observed on schema objects, where a job ID was detected as missing when validating objects in a transaction. This has been fixed. [#76532][#76532]
-- Previously, adding a [hash-sharded index](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) to a table watched by a changefeed could produce errors due to not distinguishing between backfills of visible columns and backfills of merely public ones, which may be hidden or inaccessible. This is now fixed. [#77316][#77316]
-- Fixed a bug that caused internal errors when `COALESCE` and `IF` expressions had inner expressions with different types that could not be cast to a common type. [#77608][#77608]
-- A zone config change event now includes the correct details of what was changed instead of incorrectly displaying `undefined`. [#77773][#77773]
-
-
Performance improvements
-
-- Improved the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer)'s cardinality estimates for predicates involving many constrained columns. This may result in better index selection for these queries. [#76786][#76786]
-- Improved the jobs system's resilience to scheduled jobs that may lock up the jobs and scheduled jobs tables for long periods of time. Each schedule now has a limited amount of time to complete its execution. The timeout is controlled via the `jobs.scheduler.schedule_execution.timeout` setting. [#77372][#77372]
-
-
-
-
Contributors
-
-This release includes 112 merged PRs by 50 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Steve Kuznetsov (first-time contributor)
-
-
-
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/monitor-and-debug-changefeeds) now record the message size histogram. [#77711][#77711]
-- Users can now perform initial scans on [newly added changefeed](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) targets by executing a statement of the form `ALTER CHANGEFEED <job_id> ADD <targets> WITH initial_scan`.
- The default behavior is to perform no initial scan on newly added targets; users can request this explicitly by replacing `initial_scan` with `no_initial_scan`. [#77263][#77263]
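- A minimal sketch (the job ID and table name are hypothetical):
- ~~~ sql
- -- Add a new target to an existing changefeed and backfill it with an initial scan.
- ALTER CHANGEFEED 12345 ADD movies WITH initial_scan;
- ~~~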
-- The value of the `server.child_metrics.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is now set to `true`. [#77561][#77561]
-- CockroachDB now limits the number of concurrent catchup scan requests issued by [rangefeed](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) clients. [#77866][#77866]
-
-
SQL language changes
-
-- TTL metrics are now labeled by relation name if the `server.child_metrics.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is enabled and the `ttl_label_metrics` storage parameter is set to `true`. Labeling is opt-in to prevent potentially unbounded cardinality on TTL-related metrics. [#77567][#77567]
-- Added support for the `MOVE` command, which moves a SQL cursor without fetching any rows from it. `MOVE` is identical to [`FETCH`](https://www.cockroachlabs.com/docs/v22.1/limit-offset), including in its arguments and syntax, except it doesn't return any rows. [#74877][#74877]
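- A minimal sketch (the table `t` and its `id` column are hypothetical; cursors must be used inside an explicit transaction):
- ~~~ sql
- BEGIN;
- DECLARE c CURSOR FOR SELECT * FROM t ORDER BY id;
- MOVE 5 FROM c;   -- advance past the first 5 rows without returning them
- FETCH 3 FROM c;  -- return the next 3 rows
- CLOSE c;
- COMMIT;
- ~~~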
-- Added the `enable_implicit_transaction_for_batch_statements` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars). It defaults to false. When true, multiple statements in a single query (a "batch statement") will all run in the same implicit transaction, which matches the PostgreSQL wire protocol. This setting is provided for users who want to preserve the behavior of CockroachDB versions v21.2 and lower. [#77865][#77865]
-- The `enable_implicit_transaction_for_batch_statements` session variable now defaults to false. [#77973][#77973]
-- The `experimental_enable_hash_sharded_indexes` session variable is deprecated as hash-sharded indexes are enabled by default. Enabling this setting results in a no-op. [#78038][#78038]
-- Added a new `crdb_internal.merge_stats_metadata` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) to group statement statistics metadata. [#78064][#78064]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) can now specify column families to target, using the syntax `[TABLE] foo FAMILY bar`. For example, `CREATE CHANGEFEED FOR TABLE foo FAMILY bar, TABLE foo FAMILY baz, TABLE users` will create a feed that watches the `bar` and `baz` column families of `foo`, as well as the whole table `users`. A family must exist with that name when the feed is created. If all columns in a watched family are dropped in an `ALTER TABLE` statement, the feed will fail with an error, similar to dropping a table. The behavior is otherwise similar to feeds created using `split_column_families`. [#77964][#77964]
-- [Casts](https://www.cockroachlabs.com/docs/v22.1/data-types#data-type-conversions-and-casts) that are affected by the `DateStyle` or `IntervalStyle` session variables used in [computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) or [partial index](https://www.cockroachlabs.com/docs/v22.1/partial-indexes) definitions will be rewritten to use immutable functions after upgrading to v22.1. [#78229][#78229]
-- When the user runs [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) on an encrypted incremental backup, they must set the `encryption_info_dir` option to the full backup's directory in order for `SHOW BACKUP` to work. [#78096][#78096]
-- The [`BACKUP TO`](https://www.cockroachlabs.com/docs/v22.1/backup) syntax to take backups is deprecated, and will be removed in a future release. Create a backup collection using the `BACKUP INTO` syntax. [#78250][#78250]
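- A minimal sketch of the collection-based syntax (the `nodelocal` destination is only an example):
- ~~~ sql
- -- Create a full backup in a collection, then append an incremental backup to it.
- BACKUP INTO 'nodelocal://1/backups';
- BACKUP INTO LATEST IN 'nodelocal://1/backups';
- ~~~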
-- Using the [`RESTORE FROM`](https://www.cockroachlabs.com/docs/v22.1/restore) syntax without an explicit subdirectory pointing to a backup in a collection is deprecated, and will be removed in a future release. Use `RESTORE FROM <subdirectory> IN <collection URI>` to restore a particular backup in a collection. [#78250][#78250]
-
-
Command-line changes
-
-- Fixed a bug where starting [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) with the `--global` flag would not simulate latencies correctly when combined with the `--insecure` flag. [#78169][#78169]
-
-
DB Console changes
-
-- Added full scan, distributed, and vectorized information to the plan displayed on the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) page. [#78114][#78114]
-
-
Bug fixes
-
-- Fixed a bug where successive [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) backfills could skip spans that were checkpointed by an initial backfill that was restarted. [#77797][#77797]
-- Fixed a bug where statements that arrived in a batch during the simple query protocol would all execute in their own implicit [transactions](https://www.cockroachlabs.com/docs/v22.1/transactions). Now, we match the PostgreSQL wire protocol behavior, so all these statements share the same implicit transaction. If a `BEGIN` is included in a statement batch, then the existing implicit transaction is upgraded to an explicit transaction. [#77865][#77865]
-- Fixed a bug in the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) that prevented expressions of the form `(NULL::STRING[] <@ ARRAY['x'])` from being folded to `NULL`. This bug was introduced in v21.2.0. [#78042][#78042]
-- Fixed broken links to the **Statement Details** page from the [**Advanced Debug**](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) and [**Sessions**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) pages. [#78099][#78099]
-- Fixed a memory leak in the [Pebble](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#pebble) block cache. [#78260][#78260]
-
-
-
-- The volatility of cast operations between [strings](https://www.cockroachlabs.com/docs/v22.1/string) and [intervals](https://www.cockroachlabs.com/docs/v22.1/interval) or [timestamps](https://www.cockroachlabs.com/docs/v22.1/timestamp) has changed from immutable to stable. This means that these cast operations can no longer be used in computed columns or partial index definitions. Instead, use the following [built-in functions:](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `parse_interval`, `parse_date`, `parse_time`, `parse_timetz`, `parse_timestamp`, or `to_char`. Upon upgrade to v22.1, CockroachDB will automatically rewrite any computed columns or partial indexes that use the affected casts to use the new built-in functions. [#78455][#78455]
-
-
Enterprise edition changes
-
-- The tenant GC job now waits for protected timestamp records that target the tenant and have a protection time less than the tenant's drop time. [#78389][#78389]
-- Users can now provide an end time for [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) through the `end_time` option. When this option is provided, the changefeed runs until it reaches the specified end timestamp, and then the changefeed job ends with a successful status code. Furthermore, there is now an `initial_scan_only` option. When this option is set, the changefeed job runs until the initial scan has completed, and then ends with a successful status code. [#78381][#78381]
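- A minimal sketch of the new options (the table, sink URI, and timestamp are hypothetical):
- ~~~ sql
- -- Stop the changefeed once it reaches the given timestamp.
- CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://host:9092' WITH end_time = '2022-05-01 00:00:00';
- -- Emit only the initial scan, then end with a successful status.
- CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://host:9092' WITH initial_scan_only;
- ~~~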
-- Schema changes are no longer blocked while core-style changefeeds are executing. [#78360][#78360]
-
-
SQL language changes
-
-- Added support for `ALTER DATABASE ... ALTER SUPER REGION`. This command allows the user to change the regions of an existing super region. For example, after successful execution of the following, super region `test1` will consist of three regions, `ca-central-1`, `us-west-1`, and `us-east-1`.
- {% include_cached copy-clipboard.html %}
- ~~~sql
- ALTER DATABASE db3 ALTER SUPER REGION "test1" VALUES "ca-central-1", "us-west-1", "us-east-1";
- ~~~
- `ALTER SUPER REGION` follows the same rules as `ADD` or `DROP` super region. [#78462][#78462]
-
-- The [session variables](https://www.cockroachlabs.com/docs/v22.1/set-vars) `datestyle_enabled` and `intervalstyle_enabled`, and the [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.defaults.datestyle.enabled` and `sql.defaults.intervalstyle.enabled` no longer have any effect. After upgrading to v22.1, these settings are effectively always interpreted as `true`. [#78455][#78455]
-- The `BUCKET_COUNT` for a [hash-sharded index](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) is now shown in the `crdb_internal.table_indexes` table. [#78625][#78625]
-- Implemented the [`COPY FROM ... ESCAPE ...`](https://www.cockroachlabs.com/docs/v22.1/copy-from) syntax. [#78417][#78417]
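- A minimal sketch, assuming PostgreSQL-style option placement (the table name, delimiter, and escape character are only examples):
- ~~~ sql
- -- Use a backslash as the escape character while copying CSV-formatted data.
- COPY t FROM STDIN WITH CSV DELIMITER ',' ESCAPE '\';
- ~~~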
-- Disabled index recommendations in [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) output for [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-by-row-tables), as the previous recommendations were not valid. [#78676][#78676]
-- Added a `crdb_internal.validate_ttl_scheduled_jobs` built-in function. This verifies that each table points to a valid scheduled job that will carry out the deletion of expired rows. [#78373][#78373]
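- A minimal sketch of invoking the check (assuming the built-in takes no arguments):
- ~~~ sql
- SELECT crdb_internal.validate_ttl_scheduled_jobs();
- ~~~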
-- Added a `crdb_internal.repair_ttl_table_scheduled_job` built-in function, which repairs the given TTL table's scheduled job by supplanting it with a valid schedule. [#78373][#78373]
-
-
Operational changes
-
-- Added a new metric that charts the number of bytes received via snapshot on any given store. [#78464][#78464]
-- Bulk ingest operations like [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore), or [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v22.1/create-index) will now fail if they try to write to a node that has less than 5% storage capacity remaining, configurable via the [`kv.bulk_io_write.min_capacity_remaining_fraction`](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) cluster setting. [#78579][#78579]
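- A minimal sketch of adjusting the threshold (`0.05` mirrors the 5% default described above):
- ~~~ sql
- SET CLUSTER SETTING kv.bulk_io_write.min_capacity_remaining_fraction = 0.05;
- ~~~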
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) jobs will now [pause](https://www.cockroachlabs.com/docs/v22.1/pause-job) if a node runs out of disk space. [#78587][#78587]
-- [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v22.1/create-index) and some other schema changes will now [pause](https://www.cockroachlabs.com/docs/v22.1/pause-job) if a node is running out of disk space. [#78587][#78587]
-- [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) will now [pause](https://www.cockroachlabs.com/docs/v22.1/pause-job) if a node is running out of disk space. [#78587][#78587]
-
-
Command-line changes
-
-- [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) has been reverted to no longer run in multi-tenant mode by default. [#78168][#78168]
-
-
DB Console changes
-
-- The [Replication Dashboard](https://www.cockroachlabs.com/docs/v22.1/ui-replication-dashboard) now includes a graph of snapshot bytes received per node. [#78580][#78580]
-- The [`_status/nodes` endpoint](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting) is now available to all users with the `VIEWACTIVITY` role option, not just admins. Also, in the DB Console, the **Nodes Overview** and **Node Reports** pages will now display unredacted information containing node hostnames and IP addresses for all users with the `VIEWACTIVITY` role option. [#78362][#78362]
-- Improved colors for status badges on the [Jobs](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) page. Three statuses on the Jobs page, `cancel-requested`, `pause-requested`, and `revert-failed`, previously had blue status badge colors that didn't properly reflect their meaning. The badge colors now indicate meaning: `cancel-requested` and `pause-requested` have gray badges and `revert-failed` has a red badge. [#78611][#78611]
-- Fixed a bug where a node in the `UNAVAILABLE` state would not have latency defined, causing the network page to crash. [#78628][#78628]
-
-
Bug fixes
-
-- CockroachDB may now fetch fewer rows when performing lookup and index joins on queries with a `LIMIT` clause. [#78473][#78473]
-- Fixed a bug whereby certain catalog interactions which occurred concurrently with node failures were not internally retried. [#78698][#78698]
-- Fixed a bug that caused the optimizer to generate invalid query plans which could result in incorrect query results. The bug, which has been present since version v21.1.0, can appear if all of the following conditions are true:
- 1. The query contains a semi-join, such as queries in the form: `SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.a = t2.a);`.
- 1. The inner table has an index containing the equality column, like `t2.a` in the example query.
- 1. The index contains one or more columns that prefix the equality column.
- 1. The prefix columns are `NOT NULL` and are constrained to a set of constant values via a `CHECK` constraint or an `IN` condition in the filter. [#78972][#78972]
-- Fixed a bug where the `LATEST` file that points to the latest full [backup](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups#full-backups) in a collection was written to a directory path with the wrong structure. [#78281][#78281]
-
-
Performance improvements
-
-- [Ranges](https://www.cockroachlabs.com/docs/v22.1/show-ranges) are split and rebalanced during bulk ingestion only when they become full, reducing unnecessary splits and merges. [#78328][#78328]
-- Unused JS files are no longer downloaded when the DB Console loads. [#78665][#78665]
-
-
-
-- The job scheduler is more efficient and should no longer lock up the jobs and scheduled jobs tables. [#79328][#79328]
-- Removed the default values from the [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v22.1/show-jobs#show-changefeed-jobs) output. [#79361][#79361]
-- Checkpoint files are no longer overwritten; they are now versioned and written side-by-side in the `/progress` directory. Temporary checkpoint files are no longer written. [#79314][#79314]
-- Changefeeds can now be distributed across pods in tenant environments. [#79303][#79303]
-
-
SQL language changes
-
-- Help text for creating indexes or primary key constraints no longer mentions `BUCKET_COUNT` because it can now be omitted and a default is used. [#79087][#79087]
-- Added support for showing default privileges on a schema. The [`SHOW DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/show-default-privileges) statement now supports optionally passing a schema name: `SHOW DEFAULT PRIVILEGES [opt_for_role] [opt_schema_name]`. Example:
-
- ~~~ sql
- SHOW DEFAULT PRIVILEGES IN SCHEMA s2
- ~~~
- ~~~
- ----
- role for_all_roles object_type grantee privilege_type
- testuser false tables testuser2 DROP
- testuser false tables testuser2 SELECT
- testuser false tables testuser2 UPDATE
- ~~~
- ~~~ sql
- SHOW DEFAULT PRIVILEGES FOR ROLE testuser IN SCHEMA s2
- ~~~
- ~~~
- ----
- role for_all_roles object_type grantee privilege_type
- testuser false tables testuser2 DROP
- testuser false tables testuser2 SELECT
- testuser false tables testuser2 UPDATE
- ~~~
- [#79177][#79177]
-
-- Added support for `SHOW SUPER REGIONS FROM DATABASE`. Example:
-
- ~~~ sql
- SHOW SUPER REGIONS FROM DATABASE mr2
- ~~~
- ~~~
- ----
- mr2 ca-central-sr {ca-central-1}
- mr2 test {ap-southeast-2,us-east-1}
- ~~~
- [#79190][#79190]
-- When you run [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) on collections, you must now use the `FROM` keyword: `SHOW BACKUP FROM <subdirectory> IN <collection URI>`. [#79116][#79116]
-- `SHOW BACKUP` without the `IN` keyword to specify a subdirectory is deprecated and will be removed in a future release. Users are recommended to only create collection-based backups and view them with `SHOW BACKUP FROM <subdirectory> IN <collection URI>`. [#79116][#79116]
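- A minimal sketch (the destination URI and subdirectory name are hypothetical):
- ~~~ sql
- -- List the backups in a collection, then inspect a specific one.
- SHOW BACKUPS IN 'nodelocal://1/backups';
- SHOW BACKUP FROM '2022/03/23-213101.37' IN 'nodelocal://1/backups';
- ~~~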
-- Added extra logging for `COPY` to the [`SQL_EXEC`](https://www.cockroachlabs.com/docs/v22.1/logging-overview#logging-channels) channel if the `sql.trace.log_statement_execute` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is set. [#79298][#79298]
-- An error message is now logged to the `SQL_EXEC` channel when parsing fails. [#79298][#79298]
-- Introduced an `expect_and_ignore_not_visible_columns_in_copy` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars). If this is set, [`COPY FROM`](https://www.cockroachlabs.com/docs/v22.1/copy-from) with no column specifiers will assume hidden columns are present in the copy data, but will ignore them when applying `COPY FROM`. [#79189][#79189]
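- A minimal sketch of enabling the variable for the current session:
- ~~~ sql
- SET expect_and_ignore_not_visible_columns_in_copy = true;
- ~~~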
-- Changed the default value of `sql.zone_configs.allow_for_secondary_tenant.enabled` to `false`. Moreover, this setting is no longer settable by secondary tenants; instead, it is now a tenant read-only cluster setting. [#79160][#79160]
-- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) now reports accurate row and byte size counts on backups created by a tenant. [#79339][#79339]
-- Memory and disk usage are now reported for the lookup joins in [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze). [#79351][#79351]
-- Privileges on a database are no longer inherited by tables or schemas created in that database. For example, `GRANT ALL ON DATABASE TEST TO foo; CREATE TABLE test.t();` no longer results in `foo` having `ALL` on the table. Users should rely on default privileges instead. You can achieve the same behavior with `USE test; ALTER DEFAULT PRIVILEGES GRANT ALL ON TABLES TO foo;`. [#79509][#79509]
-- The `InvalidPassword` error code is now returned if the password is invalid or the user does not exist when authenticating. [#79515][#79515]
-
-
Operational changes
-
-- The `kv.allocator.load_based_rebalancing_interval` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) now lets operators set the interval at which each store in the cluster will check for load-based lease or replica rebalancing opportunities. [#79073][#79073]
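- A minimal sketch (the `1m` interval is only an example):
- ~~~ sql
- SET CLUSTER SETTING kv.allocator.load_based_rebalancing_interval = '1m';
- ~~~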
-- [Rangefeed](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) memory budgets can now be disabled for all new feeds via the `kv.rangefeed.memory_budgets.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). This setting could be used on CockroachDB {{ site.data.products.dedicated }} clusters to disable budgeting as a mitigation for bugs, for example if feeds abort while nodes have sufficient free memory. [#79321][#79321]
-- Rangefeed memory budgets can be disabled on the fly by changing the cluster setting, without the need to restart the feed. [#79321][#79321]
-
-
DB Console changes
-
-- Minor styling changes on [**Hot Ranges**](https://www.cockroachlabs.com/docs/v22.1/ui-hot-ranges-page) page to follow the same style as other pages. [#79501][#79501]
-- On the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) page, changed the order of tabs to **Overview**, **Explain Plan**, **Diagnostics**, and **Execution Stats** and changed the **Explain Plan** tab to **Explain Plan**s (plural). [#79234][#79234]
-
-
Bug fixes
-
-- Fixed a null pointer exception (NPE) during the cleanup of a failed or cancelled [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) job. [#78992][#78992]
-- Fixed a bug where [`num_runs`](https://www.cockroachlabs.com/docs/v22.1/show-jobs) was incremented twice for certain jobs upon being started. [#79052][#79052]
-- A bug has been fixed that caused errors when trying to evaluate queries with `NULL` values annotated as a tuple type, such as `NULL:::RECORD`. This bug was present since version 19.1. [#78531][#78531]
-- [`ALTER TABLE [ADD|DROP] COLUMN`](https://www.cockroachlabs.com/docs/v22.1/alter-table) are now subject to [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control), which will prevent these operations from overloading the storage engine. [#79209][#79209]
-- Index usage stats are now properly captured for index joins. [#79241][#79241]
-- [`SHOW SCHEMAS FROM <database>`](https://www.cockroachlabs.com/docs/v22.1/show-schemas) now includes user-defined schemas. [#79308][#79308]
-- A distributed query that results in an error on the remote node no longer has an incomplete trace. [#79193][#79193]
-- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v22.1/import-into) no longer creates duplicate entries with [`UNIQUE`](https://www.cockroachlabs.com/docs/v22.1/unique) constraints in [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-by-row-tables) and tables utilizing `UNIQUE WITHOUT INDEX` constraints. A new post-`IMPORT` validation step for those tables now fails and rolls back the `IMPORT` in such cases. [#79323][#79323]
-- Fixed a bug in IO [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) where rate limiting could fail to engage when traffic was stalled such that no work was admitted, despite the store being in an unhealthy state. [#79343][#79343]
-- The execution time as reported on [`DISTSQL`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze#explain-analyze-distsql) diagrams within the statement bundle collected via [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze#debug-option) is no longer negative when the statement encountered an error. [#79373][#79373]
-- CockroachDB reports fewer "memory budget exceeded" errors when performing [lookup joins](https://www.cockroachlabs.com/docs/v22.1/joins#lookup-joins). [#79351][#79351]
-- `LIMIT` queries with an `ORDER BY` clause that scan the index of a virtual system table, such as `pg_type`, no longer return incorrect results. [#79460][#79460]
-- [`nextval` and `setval`](https://www.cockroachlabs.com/docs/v22.1/create-sequence#sequence-functions) are non-transactional, except when called in the same transaction in which the sequence was created. This change prevents a bug where creating a sequence and calling `nextval` and `setval` on it within a transaction caused the query containing `nextval` to hang. [#79506][#79506]
-- A bug has been fixed that caused the optimizer to generate query plans with logically incorrect lookup joins. The bug can only occur in queries with an inner join, e.g., `t1 JOIN t2`, if all of the following are true:
-
- - The join contains an equality condition between columns of both tables, e.g., `t1.a = t2.a`.
- - A query filter or `CHECK` constraint constrains a column to a set of specific values, e.g., `t2.b IN (1, 2, 3)`. In the case of a `CHECK` constraint, the column must be `NOT NULL`.
- - A query filter or `CHECK` constraint constrains a column to a range, e.g., `t2.c > 0`. In the case of a `CHECK` constraint, the column must be `NOT NULL`.
- - An index contains a column from each of the criteria above, e.g., `INDEX t2(a, b, c)`.
-
- This bug has been present since version 21.2.0. [#79504][#79504]
-- A bug has been fixed that caused the optimizer to generate invalid query plans, which could result in incorrect query results. The bug, which has been present since v21.1.0, can appear if all of the following conditions are true:
-
- - The query contains a semi-join, such as queries in the form `SELECT * FROM a WHERE EXISTS (SELECT * FROM b WHERE a.a @> b.b)`.
- - The inner table has a multi-column inverted index containing the inverted column in the filter.
- - The index prefix columns are constrained to a set of values via the filter or a `CHECK` constraint, e.g., with an `IN` operator. In the case of a `CHECK` constraint, the column is `NOT NULL`.
- [#79504][#79504]
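-
-As an illustration of the sequence fix noted above, a minimal sketch (the sequence name is hypothetical) of the pattern that previously caused the `nextval` query to hang and now completes:
-
-~~~ sql
-BEGIN;
-CREATE SEQUENCE s;
-SELECT nextval('s'); -- transactional here, because s was created in this transaction
-SELECT setval('s', 42);
-COMMIT;
-~~~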
-
-
Performance improvements
-
-- Uniqueness checks performed for inserts into [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-by-row-tables) no longer search all regions for duplicates. In some cases, these checks will now only search a subset of regions when inserting a single row of constant values. [#79251][#79251]
-- Bulk ingestion writes now use a lower priority for [admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control). [#79352][#79352]
-- Browser caching of files loaded in DB Console is now supported. [#79382][#79382]
-
-
-
-- Unified the syntax for defining the behavior of initial scans on [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) by extending the [`initial_scan`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed#initial-scan) option to accept three possible values: `yes`, `no`, or `only` (see the example after this list). [#79471][#79471]
-- Changefeeds can now target tables with [more than one column family](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) using either the [`split_column_families` option](https://www.cockroachlabs.com/docs/v22.1/create-changefeed#split-column-families) or the `FAMILY` keyword. Changefeeds will emit individual messages per column family on a table. [#79448][#79448]
-- The `full_table_name` option is now supported for all [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) sinks. [#79448][#79448]
-- `LATEST` files are no longer overwritten and are now versioned and written in the `/metadata/latest` directory for non-mixed-version clusters. [#79553][#79553]
-- Previously, the [`ALTER CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) statement would not work with changefeeds that use fully qualified names in their [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/create-changefeed) statement. This is now fixed by ensuring that each existing target is added with its fully qualified name so that it can be resolved in validation checks. Every changefeed will now display the fully qualified name of every target in the [`SHOW CHANGEFEED JOB`](https://www.cockroachlabs.com/docs/v22.1/show-jobs) query. [#79745][#79745]
-- Added a `changefeed.backfill.scan_request_size` setting to control scan request size during [backfill](https://www.cockroachlabs.com/docs/v22.1/changefeed-messages#schema-changes-with-column-backfill). [#79710][#79710]
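-
-As a sketch of the `initial_scan` and `split_column_families` options described above (the table names and Kafka URI are hypothetical):
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE movr.rides
-  INTO 'kafka://localhost:9092'
-  WITH initial_scan = 'only';
-
-CREATE CHANGEFEED FOR TABLE movr.users
-  INTO 'kafka://localhost:9092'
-  WITH split_column_families;
-~~~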
-
-
SQL language changes
-
-- CockroachDB now ensures the user passes the same number of locality-aware URIs for the full [backup](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) destination as the `incremental_location` parameter (for example, `BACKUP INTO LATEST IN ($1, $2, $3) WITH incremental_location = ($4, $5, $6)`). [#79600][#79600]
-- `EXPLAIN (DDL)`, when invoked on statements supported by the declarative schema changer, prints a plan of what the schema changer will do. This can be useful for anticipating the complexity of a schema change (for example, anything involving backfill or validation operations might be slow to run) and for troubleshooting. `EXPLAIN (DDL, VERBOSE)` produces a more detailed plan. [#79780][#79780]
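-
-For example, a minimal sketch (the table and column names are hypothetical, and the statement must be one supported by the declarative schema changer):
-
-~~~ sql
-EXPLAIN (DDL) ALTER TABLE t ADD COLUMN c INT8 NOT NULL DEFAULT 0;
-EXPLAIN (DDL, VERBOSE) ALTER TABLE t ADD COLUMN d INT8 NOT NULL DEFAULT 0;
-~~~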
-
-
Operational changes
-
-- Added a new time-series metric, `storage.marked-for-compaction-files`, for the count of files marked for compaction. This is useful for monitoring storage-level background migrations. [#79370][#79370]
-- [Changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) creation and failure event logs are now emitted to the [`TELEMETRY`](https://www.cockroachlabs.com/docs/v22.1/eventlog#telemetry-events) logging channel. [#79749][#79749]
-
-
Command-line changes
-
-- Introduced a new `ttllogger` [workload](https://www.cockroachlabs.com/docs/v22.1/cockroach-workload) which creates a TTL table emulating a "log" with rows expiring after the duration specified in the `--ttl` flag. [#79482][#79482]
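-
-A sketch of running the workload against a local insecure cluster (the connection URL and TTL value are assumptions; consult `cockroach workload run ttllogger --help` for the full flag list, and initialize the workload first with `cockroach workload init ttllogger` if required):
-
-~~~ shell
-cockroach workload run ttllogger --ttl 5m 'postgresql://root@localhost:26257?sslmode=disable'
-~~~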
-
-
DB Console changes
-
-- The [Hot Ranges page](https://www.cockroachlabs.com/docs/v22.1/ui-hot-ranges-page) now allows filtering by column. [#79647][#79647]
-- Added status of automatic statistics collection to the [Databases](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) and Databases [table details](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#table-details) pages. [#76168][#76168]
-- Added timestamp of last statistics collection to the Databases > [Tables](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#tables-view) and Databases table details pages. [#76168][#76168]
-
-
Bug fixes
-
-- Previously, privileges for restored tables were being generated incorrectly without taking into consideration their parent schema's default privilege descriptor. This is now fixed. [#79534][#79534]
-- Fixed a bug that caused an internal error when the inner expression of a column access expression evaluated to `NULL`. For example, evaluation of the expression `(CASE WHEN b THEN ((ROW(1) AS a)) ELSE NULL END).a` would error when `b` is `false`. This bug was present since v19.1 or earlier. [#79529][#79529]
-- Fixed a bug that caused an error when accessing a named column of a labeled tuple. The bug only occurred when an expression could produce one of several different tuples. For example, `(CASE WHEN true THEN (ROW(1) AS a) ELSE (ROW(2) AS a) END).a` would fail to evaluate. This bug was present since v22.1.0. Although present in previous versions, it was impossible to encounter due to limitations that prevented using tuples in this way. [#79529][#79529]
-- Previously, queries reading from an index or primary key on `FLOAT` or `REAL` columns `DESC` would read `-0` for every `+0` value stored in the index. This has been fixed to correctly read `+0` for `+0` and `-0` for `-0`. [#79533][#79533]
-- Fixed some cases where a job or schema change that had encountered an error would continue to execute for some time before eventually failing. [#79713][#79713]
-- Previously, the optional `is_called` parameter of the `setval` function would default to `false` when not specified. It now defaults to `true` to match PostgreSQL behavior (see the example after this list). [#79779][#79779]
-- On the [Raft Messages](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) page, the date picker and drag-to-zoom functionality are now fixed. [#79791][#79791]
-- Fixed a bug where Pebble compaction heuristics could allow a large compaction backlog to accumulate, eventually leading to high read amplification. [#79597][#79597]
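-
-To illustrate the `setval` default noted above, a minimal sketch (the sequence name is hypothetical):
-
-~~~ sql
-CREATE SEQUENCE s;
-SELECT setval('s', 10);        -- is_called defaults to true, so the next nextval('s') returns 11
-SELECT setval('s', 10, false); -- with is_called = false, the next nextval('s') returns 10
-~~~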
-
-
-
-- Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new `bulkio.backup.deprecated_full_backup_with_subdir` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true`. [#80145][#80145]
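-
-For example, to temporarily re-enable the deprecated syntax:
-
-~~~ sql
-SET CLUSTER SETTING bulkio.backup.deprecated_full_backup_with_subdir = true;
-~~~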
-
-
SQL language changes
-
-- Introduced a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings), `sql.multi_region.allow_abstractions_for_secondary_tenants.enabled`, to allow the operator to control if a secondary tenant can make use of [multi-region abstractions](https://www.cockroachlabs.com/docs/v22.1/migrate-to-multiregion-sql#replication-zone-patterns-and-multi-region-sql-abstractions). [#80013][#80013]
-- Introduced new `cloudstorage.<provider>.write.node_rate_limit` and `cloudstorage.<provider>.write.node_burst_limit` [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to limit the rate at which bulk operations write to the designated cloud storage provider. [#80243][#80243]
-
-
Command-line changes
-
-- [`COPY ... FROM STDIN`](https://www.cockroachlabs.com/docs/v22.1/copy-from) now works from the [`cockroach` CLI](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands) (see the example after this list). Note that it is not supported inside transactions. [#79819][#79819]
-- The mechanism for query cancellation is disabled in the [`sql` shell](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) until a later patch release. [#79740][#79740]
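-
-A sketch of the `COPY` support mentioned above, run inside a `cockroach sql` shell and outside of any explicit transaction (the table and data are hypothetical):
-
-~~~ sql
-CREATE TABLE t (a INT, b STRING);
-COPY t (a, b) FROM STDIN;
--- enter tab-separated rows, one per line, for example: 1<TAB>foo
--- terminate the input with a line containing only \.
-~~~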
-
-
DB Console changes
-
-- Statements are no longer separated by aggregation interval on the [Statement Page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). Now, all statements with the same fingerprint show as a single row. [#80137][#80137]
-
-
Operational changes
-
-- If a user does not pass a subdirectory in their backup command, CockroachDB will only ever attempt to create a full backup. Previously, a backup command with [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) and no subdirectory would increment on an existing backup if the `AS OF SYSTEM TIME` backup’s resolved subdirectory equaled the existing backup’s directory. Now, an error is thrown. [#80145][#80145]
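-
-A sketch of the distinction (the collection URI is hypothetical):
-
-~~~ sql
-BACKUP INTO 'nodelocal://1/backups' AS OF SYSTEM TIME '-10s';           -- always starts a new full backup
-BACKUP INTO LATEST IN 'nodelocal://1/backups' AS OF SYSTEM TIME '-10s'; -- appends an incremental to the latest full backup
-~~~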
-
-
Bug fixes
-
-- HTTP 304 responses no longer result in error logs. [#79855][#79855]
-- Fixed a bug that may have caused a panic if a Kafka server being written to by a [`changefeed`](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) failed at the wrong moment. [#79908][#79908]
-- Fixed a bug that would prevent CockroachDB from resolving the public schema if a [`changefeed`](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks) was created with a cursor timestamp prior to when the public schema migration happened. [#80165][#80165]
-- Fixed a bug where running an [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) incremental backup with an end time earlier than the previous backup's end time could lead to an incremental backup chain in the wrong order. Now, an error is thrown if the time specified in `AS OF SYSTEM TIME` is earlier than the previous backup's end time. [#80145][#80145]
-
-
Performance improvements
-
-- Running multiple [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) concurrently is now more efficient. [#79950][#79950]
-- Performing a rollback of a [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v22.1/create-table-as) statement with large quantities of data now has performance similar to using [`DROP TABLE`](https://www.cockroachlabs.com/docs/v22.1/drop-table). [#79601][#79601]
-
-
-
-- `crdb_internal.reset_sql_stats()` and `crdb_internal.reset_index_usage_stats()` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#system-info-functions) now check if the user has the [admin role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#80384][#80384]
-
-- SCRAM authentication and password encryption are not enabled by default. [#80248][#80248]
-
-
Enterprise edition changes
-
-- [Backups](https://www.cockroachlabs.com/docs/v22.1/take-full-and-incremental-backups) run by secondary tenants now write protected timestamp records to protect their target schema objects from garbage collection during backup execution. [#80670][#80670]
-
-
SQL language changes
-
-- The [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `cloudstorage.<provider>.read.node_rate_limit` and `cloudstorage.<provider>.read.node_burst_limit` can now be used to limit throughput when reading from cloud storage during a [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) or [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import). [#80281][#80281]
-
-
Bug fixes
-
-- Fixed a bug where automatic encryption-at-rest data key rotation would become disabled after a node restart without a store key rotation. [#80564][#80564]
-
-- Fixed a bug whereby the cluster version could regress due to a race condition. [#80712][#80712]
-
-
Performance improvements
-
-- Bulk ingestion of unsorted data during [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) and schema changes now uses a higher level of parallelism to send produced data to the [storage layer](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer). [#80487][#80487]
-
-
Miscellaneous
-
-
Docker
-
-- Refactored the initialization process of the Docker image to accommodate use with in-memory storage. [#80558][#80558]
-
-
-
-- Fixed a very rare case where CockroachDB could incorrectly evaluate queries with an [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by) clause when the prefix of ordering was already provided by the index ordering of the scanned table. [#80715][#80715]
-
-- Fixed a rare crash when encountering a nil-pointer dereference in `google.golang.org/grpc/internal/transport.(*Stream).Context(...)`. [#80936][#80936]
-
-
Contributors
-
-This release includes 3 merged PRs by 3 authors.
-
-[#80715]: https://github.com/cockroachdb/cockroach/pull/80715
-[#80936]: https://github.com/cockroachdb/cockroach/pull/80936
diff --git a/src/current/_includes/releases/v22.1/v22.1.0.md b/src/current/_includes/releases/v22.1/v22.1.0.md
deleted file mode 100644
index 435efb9bc93..00000000000
--- a/src/current/_includes/releases/v22.1/v22.1.0.md
+++ /dev/null
@@ -1,153 +0,0 @@
-## v22.1.0
-
-Release Date: May 24, 2022
-
-With the release of CockroachDB v22.1, we've made a variety of management, performance, security, and compatibility improvements. Check out a [summary of the most significant user-facing changes](#v22-1-0-feature-summary) and then [upgrade to CockroachDB v22.1](https://www.cockroachlabs.com/docs/v22.1/upgrade-cockroach-version). For a release announcement with further focus on key features, see the [v22.1 blog post](https://www.cockroachlabs.com/blog/cockroachdb-22-1-release/).
-
-We're running a packed [schedule of launch events](https://www.cockroachlabs.com/cockroachdb-22-1-launch/) over the next few weeks, which include two opportunities to win limited-edition swag. Join our [Office Hours session](https://www.cockroachlabs.com/webinars/22-1-release-office-hours/) for all your questions, a [coding livestream](http://twitch.tv/itsaydrian) where we'll play with new features, and a [live talk on building and preparing for scale](https://www.cockroachlabs.com/webinars/scale-happens/).
-
-{% include releases/release-downloads-docker-image.md release=include.release %}
-
-
-
-This section summarizes the most significant user-facing changes in v22.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases. You can also search for [what's new in v22.1 in our docs](https://www.cockroachlabs.com/docs/search?query=new+in+v22.1).
-
-{{site.data.alerts.callout_info}}
-"Core" features are freely available in the core version of CockroachDB and do not require an enterprise license. "Enterprise" features require an [enterprise license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). [CockroachDB {{ site.data.products.cloud }} clusters](https://cockroachlabs.cloud/) include all enterprise features. You can also use [`cockroach demo`](https://www.cockroachlabs.com/docs/v22.1/cockroach-demo) to test enterprise features in a local, temporary cluster.
-{{site.data.alerts.end}}
-
-- [SQL](#v22-1-0-sql)
-- [Recovery and I/O](#v22-1-0-recovery-and-i-o)
-- [Database operations](#v22-1-0-database-operations)
-- [Backward-incompatible changes](#v22-1-0-backward-incompatible-changes)
-- [Deprecations](#v22-1-0-deprecations)
-- [Known limitations](#v22-1-0-known-limitations)
-- [Education](#v22-1-0-education)
-
-
-
-
SQL
-
-
-
-| Version | Feature | Description |
-----------+---------+--------------
-| Core | Hash-sharded indexes | [Hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) distribute sequential data across multiple nodes within your cluster, eliminating hotspots in certain types of scenarios. This feature is now generally available (GA) after a previous experimental release. |
-| Core | Super regions | [Super regions](https://www.cockroachlabs.com/docs/v22.1/add-super-region) allow you to define a set of regions on the database such that any `REGIONAL BY TABLE` table based in the super region or any `REGIONAL BY ROW` partition in the super region will have all their replicas in regions that are also within the super region. Their primary use is for [data domiciling](https://www.cockroachlabs.com/docs/v22.1/data-domiciling). This feature is in preview release. |
-| Core | Support for AWS DMS | Support for [AWS Database Migration Service (AWS DMS)](https://www.cockroachlabs.com/docs/v22.1/third-party-database-tools#schema-migration-tools) allows users to migrate data from an existing database to CockroachDB. |
-| Core | Admission control | [Admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) helps maintain cluster performance and availability when some nodes experience high load. This was previously available as a preview release but is now generally available and enabled by default. |
-| Core | Set a quality of service (QoS) level for SQL sessions with admission control | In an overload scenario where CockroachDB cannot service all requests, you can identify which requests should be prioritized by setting a _quality of service_ (QoS). [Admission control](https://www.cockroachlabs.com/docs/v22.1/admission-control) queues work throughout the system. You can [set the QoS level](https://www.cockroachlabs.com/docs/v22.1/admission-control#set-quality-of-service-level-for-a-session) on its queues for SQL requests submitted in a session to `background`, `regular`, or `critical`. |
-| Core | Rename objects within the transaction that creates them | It is now possible to swap names for tables and other objects within the same transaction that creates them. For example: `CREATE TABLE foo(); BEGIN; ALTER TABLE foo RENAME TO bar; CREATE TABLE foo(); COMMIT;` |
-| Core | Drop `ENUM` values using `ALTER TYPE...DROP VALUE` | Drop a specific value from the user-defined type's list of values. The [`ALTER TYPE...DROP VALUE` statement](https://www.cockroachlabs.com/docs/v22.1/alter-type) is now available by default to all instances. It was previously disabled by default, requiring the `enable_drop_enum_value` cluster setting to enable it. |
-| Core | Support the `UNION` variant for recursive CTE | For compatibility with PostgreSQL, `WITH RECURSIVE...UNION` statements are now supported in [recursive common table expressions](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions#recursive-common-table-expressions). |
-| Core | Locality optimized search supports `LIMIT` clauses | Queries with a `LIMIT` clause on a single table, either explicitly written or implicit (such as in an uncorrelated `EXISTS` subquery), now [scan that table with improved latency](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#locality-optimized-search-in-multi-region-clusters) if the table is defined with `LOCALITY REGIONAL BY ROW` and the number of qualified rows residing in the local region does not exceed the hard limit (the sum of the `LIMIT` clause and optional `OFFSET` clause values). This optimization is only applied if the hard limit is 100,000 or less. |
-| Core | Surface errors for testing retry logic | To help developers test their application's retry logic, they can set the session variable `inject_retry_errors_enabled` so that any statement that is not a `SET` statement will [return a transaction retry error](https://www.cockroachlabs.com/docs/v22.1/transactions#testing-transaction-retry-logic) if it is run inside of an explicit transaction. |
-| Core | Row Level TTL (preview release) | With Time to Live ("TTL") expiration on table rows, also known as [Row-Level TTL](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl), CockroachDB automatically deletes rows once they have been stored longer than their specified expiration time. This avoids the complexities and potential performance impacts of managing expiration at the application level. See the documentation for Limitations that are part of this preview release. |
-| Core | `DATE` and `INTERVAL` style settings available by default | The session variables `datestyle_enabled` and `intervalstyle_enabled`, and the cluster settings `sql.defaults.datestyle.enabled` and `sql.defaults.intervalstyle.enabled` no longer have any effect. When the upgrade to v22.1 is finalized, all of these settings are effectively interpreted as `true`, enabling the use of the `intervalstyle` and `datestyle` session and cluster settings. |
-| Core | Optimized node draining with `connection_wait` | If you cannot tolerate connection errors during node drain, you can now change the `server.shutdown.connection_wait` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to allow SQL client connections to gracefully close before CockroachDB forcibly closes them. For guidance, see [Node Shutdown](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#server-shutdown-connection_wait). |
-| Core | PostgreSQL wire protocol query cancellation | In addition to the `CANCEL QUERY` SQL statement, developers can now use the [cancellation method specified by the PostgreSQL wire protocol](https://www.cockroachlabs.com/docs/v22.1/cancel-query#considerations). |
-| Core | Gateway node connection limits | To control the maximum number of non-superuser ([`root`](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#root-user) user or other [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role)) connections a [gateway node](https://www.cockroachlabs.com/docs/v22.1/architecture/sql-layer#gateway-node) can have open at one time, use the `server.max_connections_per_gateway` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). If a new non-superuser connection would exceed this limit, an error message and code are returned. |
-| Core | Support for `WITH GRANT OPTION` privilege | See [Security](#v22-1-0-security). |
-| Core | Transaction contention events | [Transaction contention events](https://www.cockroachlabs.com/docs/v22.1/crdb-internal#transaction_contention_events) enable you to determine where contention is occurring in real-time for affected active statements, and historically for past statements. |
-| Core | Index recommendations | [Index recommendations](https://www.cockroachlabs.com/docs/v22.1/explain#default-statement-plans) indicate when your query would benefit from an index and provide a suggested statement to create the index. |
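-
-As a brief sketch of two of the SQL features in the table above, hash-sharded indexes and session-level quality of service (the table name is hypothetical, and the session variable name follows the QoS documentation linked above):
-
-~~~ sql
-CREATE INDEX ON events (ts) USING HASH;
-SET default_transaction_quality_of_service = 'background';
-~~~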
-
-
Developer Experience
-
-| Version | Feature | Description |
-----------+---------+--------------
-| Core | Support for Prisma | CockroachDB now supports the [Prisma ORM](https://www.prisma.io/blog/prisma-preview-cockroach-db-release). A new [tutorial and example app](https://www.cockroachlabs.com/docs/v22.1/build-a-nodejs-app-with-cockroachdb-prisma) are available. |
-| Core | Lightweight `cockroach-sql` executable | A new client-only SQL shell for users who do not operate the cluster themselves. |
-
-
Recovery and I/O
-
-| Version | Feature | Description |
-----------+---------+--------------
-| Enterprise | Alter changefeeds | The new SQL statement [ALTER CHANGEFEED](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) enables users to modify active changefeeds, preventing the need to start a new changefeed. |
-| Enterprise | Track metrics per changefeed | Create [labels for capturing a metric](https://www.cockroachlabs.com/docs/v22.1/monitor-and-debug-changefeeds#using-changefeed-metrics-labels) across one or more specified changefeeds. This is an experimental feature that you can enable using a cluster setting. |
-| Core | Changefeed support for tables with multiple column families | Changefeeds can now target [tables with more than one column family](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families) using either the `split_column_families` option or the `FAMILY` keyword. Changefeeds will emit individual messages per column family on a table. |
-| Enterprise | Stream data to Google Cloud Pub/Sub | Changefeeds can now [stream data to a Pub/Sub sink](https://www.cockroachlabs.com/docs/v22.1/changefeed-examples#create-a-changefeed-connected-to-a-google-cloud-pub-sub-sink). |
-| Core | Export to the Apache Parquet format | Using a SQL `EXPORT` statement, users can now choose to [export data to the Parquet format](https://www.cockroachlabs.com/docs/v22.1/export). |
-| Core | Backup encryption enhancements | See [Security](#v22-1-0-security). |
-| Core | Select an S3 storage class for backups | [Associate your backup objects with a specific storage class](https://www.cockroachlabs.com/docs/v22.1/backup#back-up-with-an-s3-storage-class) in your Amazon S3 bucket. |
-| Core | Exclude a table's data from backups | [Exclude a table's row data from a backup](https://www.cockroachlabs.com/docs/v22.1/create-table#create-a-table-with-data-excluded-from-backup). This may be useful for tables with high-churn data that you would like to garbage collect more quickly than the incremental backup schedule. |
-| Core | Store incremental backups in custom locations | Specify a different [storage location for incremental backups](https://www.cockroachlabs.com/docs/v22.1/backup#create-incremental-backups) using the new BACKUP option `incremental_location`. This makes it easier to retain full backups longer than incremental backups, as is often required for compliance reasons. |
-| Core | Rename database on restore | An optional `new_db_name` clause on [`RESTORE DATABASE`](https://www.cockroachlabs.com/docs/v22.1/restore#databases) statements allows the user to rename the database they intend to restore. This can be helpful in disaster recovery scenarios when restoring to a temporary state. |
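-
-A sketch of the `incremental_location` and `new_db_name` options in the table above (URIs and database names are hypothetical):
-
-~~~ sql
-BACKUP DATABASE movr INTO 'nodelocal://1/backups';
-BACKUP DATABASE movr INTO LATEST IN 'nodelocal://1/backups'
-    WITH incremental_location = 'nodelocal://1/backups-incremental';
-RESTORE DATABASE movr FROM LATEST IN 'nodelocal://1/backups'
-    WITH incremental_location = 'nodelocal://1/backups-incremental', new_db_name = 'movr_restored';
-~~~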
-
-
Database operations
-
-| Version | Feature | Description |
-----------+---------+--------------
-| Core | DB Console access from a specified node | On the Advanced Debug page, DB Console access can be [routed from the currently accessed node](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#license-and-node-information) to a specific node on the cluster. |
-| Core | Alerting rules | Every CockroachDB node exports an [alerting rules template](https://www.cockroachlabs.com/docs/v22.1/monitoring-and-alerting#prometheus-alerting-rules-endpoint) at `http://<host>:<http-port>/api/v2/rules/`. These rule definitions are formatted for easy integration with Prometheus' Alertmanager. |
-| Core | `NOSQLLOGIN` role option | The `NOSQLLOGIN` [role option](https://www.cockroachlabs.com/docs/v22.1/create-role#role-options) grants a user access to the DB Console without also granting SQL shell access. |
-| Core | Hot ranges observability | The [Hot Ranges page](https://www.cockroachlabs.com/docs/v22.1/ui-hot-ranges-page) of the DB Console provides details about ranges receiving a high number of reads or writes. |
-| Core | Per-replica circuit breakers | When individual ranges become temporarily unavailable, requests to those ranges are refused by a [per-replica "circuit breaker" mechanism](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#per-replica-circuit-breakers) instead of hanging indefinitely. |
-
-
Security
-
-| Version | Feature | Description |
-----------+---------+--------------
-| Core | Support of Google Cloud KMS for encrypted backups | Google Cloud KMS is now supported as a key management system for [encrypted BACKUP and RESTORE operations](https://www.cockroachlabs.com/docs/v22.1/take-and-restore-encrypted-backups). |
-| Enterprise | Rotate backup encryption keys | Keep your backups secure by rotating the AWS or Google Cloud KMS keys you use to encrypt your backups and adding them to an existing key chain using the new [ALTER BACKUP](https://www.cockroachlabs.com/docs/v22.1/alter-backup) statement. |
-| Core | Support for `WITH GRANT OPTION` privilege | Users granted a privilege with [`WITH GRANT OPTION`](https://www.cockroachlabs.com/docs/v22.1/grant) can in turn grant that privilege to others. The owner of an object implicitly has the `GRANT OPTION` for all privileges, and the `GRANT OPTION` is inherited through role memberships. This matches functionality offered in PostgreSQL. |
-| Core | Support client-provided password hashes for credential definitions | CockroachDB now [recognizes pre-computed password hashes](https://www.cockroachlabs.com/docs/v22.1/security-reference/scram-authentication#server-user_login-store_client_pre_hashed_passwords-enabled) when presented to the regular `PASSWORD` option when creating or updating a role. |
-| Core | Support SCRAM-SHA-256 SASL authentication method | CockroachDB is now able to [authenticate users](https://www.cockroachlabs.com/docs/v22.1/security-reference/authentication) via the DB Console and SQL sessions when the client provides a cleartext password and the stored credentials are encoded [using the SCRAM-SHA-256 algorithm](https://www.cockroachlabs.com/docs/v22.1/security-reference/scram-authentication). For SQL client sessions, authentication methods `password` (cleartext passwords) and `cert-password` (TLS client cert or cleartext password) with either CRDB-BCRYPT or SCRAM-SHA-256 stored credentials can now be used. Previously, only CRDB-BCRYPT stored credentials were supported for cleartext password authentication. |
-| Core | Support HSTS headers to enforce HTTPS | Clusters can now be configured to send HSTS headers with HTTP requests in order to enable browser-level enforcement of HTTPS for the cluster host. Once the headers are present, after an initial request, browsers will force HTTPS on all subsequent connections to the host. This reduces the possibility of MitM attacks, to which HTTP-to-HTTPS redirects are vulnerable. |
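-
-A minimal sketch of the `WITH GRANT OPTION` support in the table above (user and table names are hypothetical):
-
-~~~ sql
-GRANT SELECT ON TABLE accounts TO analyst WITH GRANT OPTION;
--- analyst can now grant SELECT on accounts to other users:
-GRANT SELECT ON TABLE accounts TO auditor;
-~~~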
-
-
Backward-incompatible changes
-
-Before [upgrading to CockroachDB v22.1](https://www.cockroachlabs.com/docs/v22.1/upgrade-cockroach-version), be sure to review the following backward-incompatible changes and adjust your deployment as necessary.
-
-- Using [`SESSION_USER`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#special-syntax-forms) in a projection or `WHERE` clause now returns the `SESSION_USER` instead of the `CURRENT_USER`. For backward compatibility, use [`session_user()`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#system-info-functions) for `SESSION_USER` and `current_user()` for `CURRENT_USER`. [#70444][#70444]
-- Placeholder values (e.g., `$1`) can no longer be used for role names in [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v22.1/alter-role) statements or for role names in [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role)/[`DROP ROLE`](https://www.cockroachlabs.com/docs/v22.1/drop-role) statements. [#71498][#71498]
-- Support has been removed for:
- - `IMPORT TABLE ... CREATE USING`
- - `IMPORT TABLE ... DATA`
- Here, the data format refers to CSV, Delimited, PGCOPY, or AVRO; these formats do not define the table schema in the same file as the data. The workaround following this change is to use `CREATE TABLE` with the same schema that was previously being passed into the `IMPORT` statement, followed by an `IMPORT INTO` the newly created table.
-- Non-standard [`cron`](https://wikipedia.org/wiki/Cron) expressions that specify seconds or year fields are no longer supported. [#74881][#74881]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) will now filter out [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) from events by default. [#74916][#74916]
-- The [environment variable](https://www.cockroachlabs.com/docs/v22.1/cockroach-commands#environment-variables) that controls the max amount of CPU that can be taken by password hash computations during authentication was renamed from `COCKROACH_MAX_BCRYPT_CONCURRENCY` to `COCKROACH_MAX_PW_HASH_COMPUTE_CONCURRENCY`. Its semantics remain unchanged. [#74301][#74301]
-- The volatility of cast operations between [strings](https://www.cockroachlabs.com/docs/v22.1/string) and [intervals](https://www.cockroachlabs.com/docs/v22.1/interval) or [timestamps](https://www.cockroachlabs.com/docs/v22.1/timestamp) has changed from immutable to stable. This means that these cast operations can no longer be used in computed columns or partial index definitions. Instead, use the following [built-in functions:](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) `parse_interval`, `parse_date`, `parse_time`, `parse_timetz`, `parse_timestamp`, or `to_char`. Upon upgrade to v22.1, CockroachDB will automatically rewrite any computed columns or partial indexes that use the affected casts to use the new built-in functions. [#78455][#78455]
-- Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new `bulkio.backup.deprecated_full_backup_with_subdir` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true`. [#80145][#80145]
-
-
Deprecations
-
-- Using the [`cockroach node drain`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) command without specifying a node ID is deprecated. [#73991][#73991]
-- The flag `--self` of the [`cockroach node decommission` command](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) is deprecated. Instead, operators should specify the node ID of the target node as an explicit argument. The node that the command is connected to should not be a target node. [#74319][#74319]
-- The `experimental_enable_hash_sharded_indexes` session variable is deprecated as hash-sharded indexes are enabled by default. Enabling this setting results in a no-op. [#78038][#78038]
-- The [`BACKUP TO`](https://www.cockroachlabs.com/docs/v22.1/) syntax to take backups is deprecated, and will be removed in a future release. Create a backup collection using the `BACKUP INTO` syntax. [#78250][#78250]
-- Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new `bulkio.backup.deprecated_full_backup_with_subdir` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to `true`. [#80145][#80145]
-- `SHOW BACKUP` without the `IN` keyword to specify a subdirectory is deprecated and will be removed in a future release. Users are recommended to only create collection-based backups and view them with `SHOW BACKUP FROM <subdirectory> IN <collectionURI>`. [#79116][#79116]
-- Using the [`RESTORE FROM`](https://www.cockroachlabs.com/docs/v22.1/restore) syntax without an explicit subdirectory pointing to a backup in a collection is deprecated, and will be removed in a future release. Use `RESTORE ... FROM <subdirectory> IN <collectionURI>` to restore a particular backup in a collection. [#78250][#78250]
-
-
Known limitations
-
-For information about new and unresolved limitations in CockroachDB v22.1, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v22.1/known-limitations).
-
-
Education
-
-| Area | Topic | Description |
----------------------+---------------------------+------------
-| Cockroach University | New Java Course | [Event-Driven Architecture for Java Developers](https://university.cockroachlabs.com/courses/course-v1:crl+event-driven-architecture-for-java-devs+self-paced/about) teaches you how to handle message queues by building the [transactional outbox pattern](https://www.cockroachlabs.com/blog/message-queuing-database-kafka/) into your application using CockroachDB's built-in Change Data Capture feature. |
-| Cockroach University | New SQL for Application Developers Courses | The new SQL for Application Developers skill path helps developers new to SQL learn how to model their application object relationships in a relational database and use transactions. Its first two courses, now available as a limited preview, are [Getting Started With SQL for Application Developers](https://university.cockroachlabs.com/courses/course-v1:crl+getting-started-with-sql+preview/about) and [Modeling Object Relationships in SQL](https://university.cockroachlabs.com/courses/course-v1:crl+modeling-object-relationships-in-sql+preview/about). |
-| Docs | CockroachDB Cloud Guidance | New docs on how to use the [Cloud API](https://www.cockroachlabs.com/docs/cockroachcloud/cloud-api) to programmatically manage the lifecycle of clusters within your organization, how to use the [`ccloud` command](https://www.cockroachlabs.com/docs/cockroachcloud/ccloud-get-started) to create, manage, and connect to CockroachDB Cloud clusters, and how to do performance benchmarking with a CockroachDB {{ site.data.products.serverless }} cluster. |
-| Docs | Improved SQL Guidance | New documentation on transaction guardrails via [limiting the number of rows written or read in a transaction](https://www.cockroachlabs.com/docs/v22.1/transactions#limit-the-number-of-rows-written-or-read-in-a-transaction) and improved content on the use of indexes [in performance recipes](https://www.cockroachlabs.com/docs/v22.1/performance-recipes) and [secondary indexes](https://www.cockroachlabs.com/docs/v22.1/schema-design-indexes). |
-| Docs | New ORM tutorials and sample apps for CockroachDB {{ site.data.products.serverless }} | Tutorials for [AWS Lambda](https://www.cockroachlabs.com/docs/v22.1/deploy-lambda-function), [Knex.JS](https://www.cockroachlabs.com/docs/v22.1/build-a-nodejs-app-with-cockroachdb-knexjs), [Prisma](https://www.cockroachlabs.com/docs/v22.1/build-a-nodejs-app-with-cockroachdb-prisma), [Netlify](https://www.cockroachlabs.com/docs/v22.1/deploy-app-netlify), and [Vercel](https://www.cockroachlabs.com/docs/v22.1/deploy-app-vercel). |
-| Docs | Additional developer resources | Best practices for [serverless functions](https://www.cockroachlabs.com/docs/v22.1/serverless-function-best-practices) and [testing/CI environments](https://www.cockroachlabs.com/docs/v22.1/local-testing), and a new [client connection reference](https://www.cockroachlabs.com/docs/v22.1/connect-to-the-database) page with CockroachDB {{ site.data.products.serverless }}, Dedicated, and Self-Hosted connection strings for fully-supported drivers/ORMs. |
-| Docs | Security doc improvements | We have restructured and improved the Security section, including [supported authentication methods](https://www.cockroachlabs.com/docs/v22.1/security-reference/authentication#currently-supported-authentication-methods). |
-| Docs | Content overhauls | [Stream Data (Changefeeds)](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) and [Performance](https://www.cockroachlabs.com/docs/v22.1/make-queries-fast) docs have also been restructured and improved. |
-| Docs | Improved release notes | Release notes (_What's New?_ pages) are now compiled to one page per major version. |
-| Docs | New Glossary | The new [Glossary](https://www.cockroachlabs.com/docs/v22.1/architecture/glossary) page under the Get Started section of the docs compiles two existing glossaries and includes additional definitions for terms commonly found within the docs. |
-| Docs | New Nav | The new navigation menu structure for the docs better classifies types of user tasks. |
diff --git a/src/current/_includes/releases/v22.1/v22.1.1.md b/src/current/_includes/releases/v22.1/v22.1.1.md
deleted file mode 100644
index 2f5cd27ada0..00000000000
--- a/src/current/_includes/releases/v22.1/v22.1.1.md
+++ /dev/null
@@ -1,194 +0,0 @@
-## v22.1.1
-
-Release Date: June 6, 2022
-
-{% include releases/release-downloads-docker-image.md release=include.release %}
-
-
Security updates
-
-- The `crdb_internal.reset_sql_stats()` and `crdb_internal.reset_index_usage_stats()` [built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) now check whether the user has the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#80278][#80278]
-
-
General changes
-
-- When using Azure Storage for data operations, CockroachDB now calculates the storage account URL from the provided `AZURE_ENVIRONMENT` query parameter. If not specified, this defaults to `AzurePublicCloud` to maintain backward compatibility. This parameter should **not** be used when the cluster is in a mixed-version or upgrading state, as nodes that have not been upgraded will continue to send requests to `AzurePublicCloud` even in the presence of this parameter. [#80801][#80801]
-
-
Enterprise edition changes
-
-- Previously, backups in the base directory of a Google Cloud Storage bucket would not be discovered by [`SHOW BACKUPS`](https://www.cockroachlabs.com/docs/v22.1/show-backup). These backups will now appear correctly. [#80493][#80493]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) to Google Cloud Platform no longer require topic creation permission if all topics being written to already exist. [#81684][#81684]
-
-
SQL language changes
-
-- `ttl_job_cron` is now displayed by default among the specified `reloptions` in [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/show-create) output. [#80292][#80292]
-- Added the `crdb_internal.cluster_locks` virtual table, which exposes the current state of locks on keys tracked by concurrency control. The virtual table displays metadata on locks currently held by transactions, as well as operations waiting to obtain the locks, and as such can be used to visualize active contention. The `VIEWACTIVITY` or `VIEWACTIVITYREDACTED` [role option](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#role-options), or the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role), is required to access the virtual table; however, if the user only has the `VIEWACTIVITYREDACTED` role option, the key on which a lock is held will be redacted. [#80517][#80517]
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import), and [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore) jobs will be paused instead of entering a failed state if they continue to encounter transient errors once they have retried a maximum number of times. The user is responsible for cancelling or resuming the job from this state. [#80434][#80434]
-- Added a `sql.conn.failures` counter metric that shows the number of failed SQL connections. [#80987][#80987]
-- Constraints that only include hidden columns are no longer excluded in [`SHOW CONSTRAINTS`](https://www.cockroachlabs.com/docs/v22.1/show-constraints). This behavior can be changed using the `show_primary_key_constraint_on_not_visible_columns` session variable. [#80637][#80637]
-- Added a `sql.txn.contended.count` metric that exposes the total number of transactions that experienced [contention](https://www.cockroachlabs.com/docs/v22.1/transactions#transaction-contention). [#81070][#81070]
-- Automatic statistics collection can now be [enabled or disabled for individual tables](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#enable-and-disable-automatic-statistics-collection-for-tables), taking precedence over the `sql.stats.automatic_collection.enabled`, `sql.stats.automatic_collection.fraction_stale_rows`, or `sql.stats.automatic_collection.min_stale_rows` [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings). The table settings may be set at table creation time, or later via [`ALTER TABLE ... SET`](https://www.cockroachlabs.com/docs/v22.1/alter-table) (see the sketch after this list). Note that any row mutations which occurred a minute or two before disabling automatic statistics collection via `ALTER TABLE ... SET` may trigger statistics collection, but DML statements submitted after the setting change will not. [#81019][#81019]
-- Added a new session variable, `enable_multiple_modifications_of_table`, which can be used instead of the cluster variable `sql.multiple_modifications_of_table.enabled` to allow statements containing multiple [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v22.1/insert), [`UPSERT`](https://www.cockroachlabs.com/docs/v22.1/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update), or [`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete) subqueries modifying the same table. As with `sql.multiple_modifications_of_table.enabled`, be warned that with this session variable enabled, there is nothing to prevent the table corruption seen in issue [#70731](https://github.com/cockroachdb/cockroach/issues/70731) from occurring if the same row is modified multiple times by different subqueries of a single statement. It is best to rewrite these statements, but the session variable is provided as an aid if this is not possible. [#79930][#79930]
-- Fixed a small typo when using `DateStyle` and `IntervalStyle`. [#81550][#81550]
-- Added an `is_grantable` column to [`SHOW GRANTS FOR {role}`](https://www.cockroachlabs.com/docs/v22.1/show-grants) for consistency with other `SHOW GRANTS` commands. [#81820][#81820]
-- Improved query performance for `crdb_internal.cluster_locks` when issued with constraints in the `WHERE` clause on `table_id`, `database_name`, or `table_name` columns. [#81261][#81261]
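-
-As a sketch of the per-table statistics setting mentioned above (the table name is hypothetical, and the storage parameter name is an assumption based on the table-level settings documentation linked in that item):
-
-~~~ sql
-ALTER TABLE t SET (sql_stats_automatic_collection_enabled = false);
-~~~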
-
-
Operational changes
-
-- The default value for `storage.max_sync_duration` has been lowered from `60s` to `20s`. CockroachDB will now exit sooner with a fatal error if a single slow disk operation exceeds this value. [#81496][#81496]
-- The [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) and [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-merge-logs) commands will now work with [JSON-formatted logs](https://www.cockroachlabs.com/docs/v22.1/log-formats#format-json). [#81469][#81469]
-
-
Command-line changes
-
-- The standalone SQL shell executable `cockroach-sql` can now be installed (renamed/symlinked) as `cockroach`, and invoked via `cockroach sql`. For example, the following commands are all equivalent: `cockroach-sql -f foo.sql`, `cockroach-sql sql -f foo.sql`; and after running `ln -s cockroach-sql cockroach`, `cockroach sql -f foo.sql`. [#80930][#80930]
-- Added a new flag `--advertise-http-addr`, which explicitly sets the HTTP advertise address that is used to display the URL for [DB Console access](https://www.cockroachlabs.com/docs/v22.1/ui-overview#db-console-access) and for proxying HTTP connections between nodes as described in [#73285](https://github.com/cockroachdb/cockroach/issues/73285). It may be necessary to set `--advertise-http-addr` in order for these features to work correctly in some deployments. Previously, the HTTP advertise address was derived from the OS hostname, the `--advertise-addr`, and the `--http-addr` flags, in that order. The new logic will override the HTTP advertise host with the host from `--advertise-addr` first if set, and then the host from `--http-addr`. The port will **never** be inherited from `--advertise-host` and will only be inherited from `--http-addr`, which is `8080` by default. [#81316][#81316]
-- If [node decommissioning](https://www.cockroachlabs.com/docs/v22.1/node-shutdown?filters=decommission) is slow or stalls, the descriptions of some "stuck" replicas are now printed to the operator. [#79516][#79516]
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v22.1/cockroach-debug-zip) now includes system tables using a denylist instead of an allowlist. [#81383][#81383]
-
-
DB Console changes
-
-- Added more job types to the **Type** filter on the [Jobs page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page#filter-jobs). [#80128][#80128]
-- Added a dropdown filter on the [Node Diagnostics page](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#even-more-advanced-debugging) to view by **Active**, **Decommissioned**, or **All** nodes. [#80320][#80320]
-- The custom selection in the time picker on the Metrics dashboards, [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page), and other DB Console pages now defaults to the currently selected time. [#80794][#80794]
-- Updated all dates to use 24h format in UTC. [#81747][#81747]
-- Fixed the size of the table area on the [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages to prevent cutting off the columns selector and filters. [#81746][#81746]
-- The [Job status](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page#job-status) on the Jobs page of the DB Console will now show a status column for [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) jobs and display the `highwater_timestamp` value in a separate column. This more closely matches the SQL output of [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v22.1/show-jobs). The highwater timestamp now displays as the nanosecond system time value by default, with the human-readable value in the tooltip, since the decimal value is copy/pasted more often. [#81757][#81757]
-- The tooltip for a Session's status on the [Sessions page](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) has been updated with a more explicit definition: `A session is Active if it has an open explicit or implicit transaction (individual SQL statement) with a statement that is actively running or waiting to acquire a lock. A session is Idle if it is not executing a statement.` [#81904][#81904]
-
-
Bug fixes
-
-- Previously, CockroachDB could lose the `INT2VECTOR` and `OIDVECTOR` type of some arrays. This is now fixed. [#78581][#78581]
-- Previously, CockroachDB could encounter an internal error when evaluating queries with `OFFSET` and `LIMIT` clauses when the sum of the offset and limit values would be larger than the `int64` range. This is now fixed. [#79878][#79878]
-- Previously, a custom time-series metric `sql.distsql.queries.spilled` was computed incorrectly, leading to an exaggerated number. This is now fixed. [#79882][#79882]
-- Fixed a bug where `NaN` coordinates when using `ST_Intersects`/`ST_Within`/`ST_Covers` would return `true` instead of `false` for point-in-polygon operations. [#80202][#80202]
-- Added a detailed error message for index out of bounds when decoding a binary tuple datum. This does not fix the root cause, but should give more insight into what is happening. [#79933][#79933]
-- Fixed a bug where `ST_MinimumBoundingCircle` would panic with infinite coordinates and a `num_segments` argument. [#80347][#80347]
-- Addressed an issue where automatic encryption-at-rest data key rotation would be disabled after a node restart without a store key rotation. [#80563][#80563]
-- Fixed the formatting/printing behavior for [`ALTER DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/alter-default-privileges), which will correct some mistaken error messages. [#80327][#80327]
-- Fixed a bug whereby the cluster version could regress due to a race condition. [#80711][#80711]
-- Fixed a rare crash which could occur when restarting a node after dropping tables. [#80572][#80572]
-- Previously, in very rare circumstances, CockroachDB could incorrectly evaluate queries with an `ORDER BY` clause when the prefix of ordering was already provided by the index ordering of the scanned table. This is now fixed. [#80714][#80714]
-- Index recommendations are no longer presented for system tables in the output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) statements. [#80952][#80952]
-- Fixed a goroutine leak when internal rangefeed clients received certain kinds of retriable errors. [#80798][#80798]
-- Fixed a bug that allowed duplicate constraint names for the same table if the constraints were on hidden columns. [#80637][#80637]
-- Errors encountered when sending rebalancing hints to the [storage layer](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer) during [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import)s and index creation are now only logged, and no longer cause the job to fail. [#80469][#80469]
-- Fixed a bug where if a transaction's commit time is pushed forward from its initial provisional time, an enclosing [`CREATE MATERIALIZED VIEW AS ...`](https://www.cockroachlabs.com/docs/v22.1/create-view) might fail to find other descriptors created in the same transaction during the view's backfill stage. This bug is described in detail in issue [#79015](https://github.com/cockroachdb/cockroach/issues/79015). [#80908][#80908]
-- Contention statistics are now collected for SQL statistics when tracing is enabled. [#81070][#81070]
-- Fixed a bug in [row-level TTL](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl) where the last range key of a table may overlap with a separate table or index, resulting in an `error decoding X bytes` error message when performing row-level TTL. [#81262][#81262]
-- Fixed a bug where `format_type` on the `void` type resulted in an error. [#81323][#81323]
-- Fixed a bug in which some prepared statements could result in incorrect results when executed. This could occur when the prepared statement included an equality comparison between an index column and a placeholder, and the placeholder was cast to a type that was different from the column type. For example, if column `a` was of type `DECIMAL`, the following prepared query could produce incorrect results when executed: `SELECT * FROM t_dec WHERE a = $1::INT8;` [#81345][#81345]
-- Fixed a bug where `ST_MinimumBoundingCircle` with `NaN` coordinates could panic. [#81462][#81462]
-- Fixed a panic that was caused by setting the `tracing` session variable using [`SET LOCAL`](https://www.cockroachlabs.com/docs/v22.1/set-vars) or [`ALTER ROLE ... SET`](https://www.cockroachlabs.com/docs/v22.1/alter-role). [#81505][#81505]
-- Fixed a bug where [`GRANT ALL TABLES IN SCHEMA`](https://www.cockroachlabs.com/docs/v22.1/grant) would not resolve the correct database name if it was explicitly specified. [#81553][#81553]
-- Previously, cancelling `COPY` commands would show an `XXUUU` error, instead of `57014`. This is now fixed. [#81595][#81595]
-- Fixed a bug that caused errors with the message `unable to vectorize execution plan: unhandled expression type` in rare cases. This bug had been present since v21.2.0. [#81591][#81591]
-- Fixed a bug where [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) could fail permanently if encountering an error while planning their distribution, even though such errors are usually transient. [#81685][#81685]
-- Fixed a gap in disk-stall detection. Previously, disk stalls during filesystem metadata operations could go undetected, inducing deadlocks. Now stalls during these types of operations will correctly fatal the process. [#81752][#81752]
-- Fixed an issue where the `encryptionStatus` field on the [**Stores** debug page](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages) of the DB Console would display an error instead of displaying encryption details when encryption-at-rest is enabled. [#81500][#81500]
-- In v21.1, a bug was introduced whereby default values were recomputed when populating data in new secondary indexes for columns which were added in the same transaction as the index. This would arise, for example, in cases like `ALTER TABLE t ADD COLUMN f FLOAT8 UNIQUE DEFAULT (random())`. If the default expression was not volatile, then the recomputation was harmless. If, however, the default expression was volatile, the data in the secondary index would not match the data in the primary index: a corrupt index would have been created. This bug has now been fixed. [#81549][#81549]
-- Previously, when running [`ALTER DEFAULT PRIVILEGES IN SCHEMA {virtual schema}`](https://www.cockroachlabs.com/docs/v22.1/alter-schema), a panic occurred. This now returns the error message `{virtual schema} is not a physical schema`. [#81704][#81704]
-- Previously, CockroachDB would encounter an internal error when executing queries with `lead` or `lag` window functions when the default argument had a different type than the first argument. This is now fixed. [#81756][#81756]
-- Fixed an issue where a left lookup join could have incorrect results. In particular, some output rows could have non-`NULL` values for right-side columns when the right-side columns should have been `NULL`. This issue only existed in v22.1.0 and prior development releases of v22.1. [#82076][#82076]
-- The [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages no longer crash when a search term includes `*`. [#82085][#82085]
-- The special characters `*` and `^` are no longer highlighted when searching on the [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages. [#82085][#82085]
-- Previously, if materialized view creation failed during the backfill stage, CockroachDB would properly clean up the view but not any of the back references. Back and forward references for materialized views are now cleaned up. [#82099][#82099]
-- Fixed a bug where `\copy` in the CLI would panic. [#82197][#82197]
-- Fixed a bug introduced in v21.2 where the `sql-stats-compaction` job had a chance of not being scheduled during an upgrade from v21.1 to v21.2, causing persisted statement and transaction statistics to be enabled without memory accounting. [#82283][#82283]
-- Fixed an edge case where `VALUES` clauses with nested tuples could fail to be type-checked properly in rare cases. [#82298][#82298]
-- The [`CREATE SEQUENCE ... AS`](https://www.cockroachlabs.com/docs/v22.1/create-sequence) statement now returns a valid error message when the specified type name does not exist. [#82322][#82322]
-- The [`SHOW STATISTICS`](https://www.cockroachlabs.com/docs/v22.1/show-statistics) output no longer displays statistics involving dropped columns. [#82315][#82315]
-- Fixed a bug where [changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) created before upgrading to v22.1 would silently fail to emit any data other than resolved timestamps. [#82312][#82312]
-- Fixed a rare crash indicating a nil-pointer dereference in `google.golang.org/grpc/internal/transport.(*Stream).Context(...)`. [#80911][#80911]
-
-
Performance improvements
-
-- Bulk ingestion of unsorted data during [`IMPORT`](https://www.cockroachlabs.com/docs/v22.1/import) and schema changes uses a higher level of parallelism to send produced data to the storage layer. [#80386][#80386]
-
-
Docker
-
-- Refactored the initialization process of the Docker image to accommodate initialization scripts with memory storage. [#80355][#80355]
-
-
-
-
Contributors
-
-This release includes 183 merged PRs by 55 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Nathan Lowe (first-time contributor)
-
-
-
-- Added three [cluster settings](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) in [#89774][#89774] to collect trace data for outlier executions with low overhead. This is only available in [v22.1](https://www.cockroachlabs.com/docs/releases/v22.1); [v22.2]({% link releases/v22.2.md %}) and later have other mechanisms to collect outlier traces. Traces are useful when investigating latency spikes, and these settings are intended to supplant most uses of `sql.trace.stmt.enable_threshold`. That setting enables verbose tracing for all statements with 100% probability, which can cause significant overhead and logging pressure in production clusters. Instead, the following settings are introduced:
- - `trace.fingerprint`
- - `trace.fingerprint.probability`
- - `trace.fingerprint.threshold`
-
- Put together (all three have to be set), these settings enable tracing only for statements with the configured hex-encoded fingerprint, do so probabilistically (with the probability given by `trace.fingerprint.probability`), and log the trace only if the latency threshold configured by `trace.fingerprint.threshold` is exceeded. To obtain a hex-encoded fingerprint, look at the contents of `system.statement_statistics`. For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SELECT encode(fingerprint_id, 'hex'), (statistics -> 'statistics' ->> 'cnt')::INT AS count, metadata ->> 'query' AS query FROM system.statement_statistics ORDER BY count DESC LIMIT 10;
- ~~~
-
- ~~~
- encode | count | query
- -----------------+-------+--------------------------------------------------------------------------------------------------------------------
- 4e4214880f87d799 | 2680 | INSERT INTO history(h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_amount, h_date, h_data) VALUES ($1, $2, __more6__)
- ~~~
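-
- As a sketch of how the three settings might then be set together (the fingerprint, probability, and threshold values below are purely illustrative, not recommendations):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SET CLUSTER SETTING trace.fingerprint = '4e4214880f87d799';
- SET CLUSTER SETTING trace.fingerprint.probability = 0.01;
- SET CLUSTER SETTING trace.fingerprint.threshold = '100ms';
- ~~~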
-
-
Bug fixes
-
-- The [Statements page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page), [Transactions page](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page), and [Transaction Details page](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) in the [DB console](https://www.cockroachlabs.com/docs/v22.1/ui-overview) now properly show the **Regions** and **Nodes** columns and filters for [multi-region clusters](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview). [#89818][#89818]
-- Fixed a bug which caused [`ALTER CHANGEFEED`](https://www.cockroachlabs.com/docs/v22.1/alter-changefeed) to fail if the changefeed was created with a cursor option and had been running for more than [`gc.ttlseconds`](https://www.cockroachlabs.com/docs/v22.1/configure-replication-zones#gc-ttlseconds). [#89399][#89399]
-- Fixed a bug that caused internal errors in rare cases when running [common table expressions](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions) (a.k.a. CTEs, or statements with `WITH` clauses). This bug was only present in [v22.2.0-beta.2](https://www.cockroachlabs.com/docs/releases/v22.2#v22-2-0-beta-2), [v22.2.0-beta.3]({% link releases/v22.2.md%}#v22-2-0-beta-3), [v21.2.16]({% link releases/v21.2.md %}#v21-2-16), and [v22.1.9]({% link releases/v22.1.md %}#v22-1-9). [#89854][#89854]
-- Fixed a bug where it was possible for [leases](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#leases) to temporarily move outside of explicitly configured regions. This often happened during [load-based rebalancing](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#load-based-replica-rebalancing), something CockroachDB does continually across the cluster. Because of this, it was also possible to observe a continual rate of lease thrashing as leases moved out of configured zones, triggered rebalancing, and induced other leases to move out of the configured zone while the original set moved back, and so on. [#90013][#90013]
-- Excluded [check constraints](https://www.cockroachlabs.com/docs/v22.1/check) of [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) from being invalidated when executing [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v22.1/import-into). [#89528][#89528]
-- Fixed overlapping charts on the [Statement Details page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page). [#90090][#90090]
-- `initial_scan_only` [changefeeds](https://www.cockroachlabs.com/docs/v22.1/create-changefeed#initial-scan) now ensure that all messages have successfully flushed to the sink prior to completion, instead of potentially missing messages. [#90293][#90293]
-- Fixed a bug introduced in [v22.1.9]({% link releases/v22.1.md %}#v22-1-9) that caused nodes to refuse to run [jobs](https://www.cockroachlabs.com/docs/v22.1/show-jobs) under rare circumstances. [#90265][#90265]
-- Fixed a bug that caused incorrect evaluation of [comparison expressions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#comparison-functions) involving [`TIME`](https://www.cockroachlabs.com/docs/v22.1/time) and [`INTERVAL`](https://www.cockroachlabs.com/docs/v22.1/interval) types, e.g., `col::TIME + '10 hrs'::INTERVAL > '01:00'::TIME`. [#90370][#90370]
-
-
-
-- HTTP API endpoints under the `/api/v2/` prefix now allow requests through when the cluster is running in insecure mode. When the cluster is running in insecure mode, requests to these endpoints will have the username set to `root`. [#87274][#87274]
-
-
SQL language changes
-
-- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `cloudstorage.azure.concurrent_upload_buffers` to configure the number of concurrent buffers used when uploading files to Azure. [#90449][#90449]
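-
- A sketch of adjusting the new setting (the value shown is illustrative, not a recommendation):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SET CLUSTER SETTING cloudstorage.azure.concurrent_upload_buffers = 3;
- ~~~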
-
-
DB Console changes
-
-- Requests to fetch table and database [statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) now have limited concurrency. This may make loading the [Databases page](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page) slower, but in return should make those pages less disruptive. [#90575][#90575]
-- Updated the filter labels from **App** to **Application Name** and from **Username** to **User Name** on the [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity). [#91294][#91294]
-- Fixed the filter and label style on the **Transactions** filter label on the [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity). [#91319][#91319]
-- Fixed the filters in the DB Console so that if the height of the filter is large, it will allow a scroll to reach **Apply**. [#90479][#90479]
-- Added a horizontal scroll to the table on the **Explain Plan** tab under [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). [#91329][#91329]
-- Fixed the filter height on the [Sessions page](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) so that the full dropdown is viewable with scroll. [#91325][#91325]
-
-
Bug fixes
-
-- Fixed an extremely rare out-of-bounds crash in the [protected timestamp](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#protected-timestamps) subsystem. [#90452][#90452]
-- Fixed the calculation of the `pg_attribute.attnum` column for indexes so that the `attnum` is always based on the order the column appears in the index. Also fixed the `pg_attribute` table so that it includes stored columns in secondary indexes. [#90728][#90728]
-- [TTL job](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl#view-scheduled-ttl-jobs) decoding error messages now correctly contain hex-encoded key bytes instead of hex-encoded key pretty-print output. [#90727][#90727]
-- Fixed a bug where CockroachDB clusters running inside of a Docker container on macOS, when mounting a host filesystem into the container, could report the total available capacity calculation of the filesystem incorrectly. [#90868][#90868]
-- Fixed the error `invalid uvarint length of 9` that could occur during TTL jobs. This bug could affect keys with secondary tenant prefixes, which affects CockroachDB {{ site.data.products.serverless }} clusters. [#90606][#90606]
-- Previously, if a primary key constraint name was a reserved SQL keyword, attempting to use a [`DROP CONSTRAINT, ADD CONSTRAINT`](https://www.cockroachlabs.com/docs/v22.1/drop-constraint#drop-and-add-a-primary-key-constraint) statement to change the primary key would result in a `constraint already exists` error. This is now fixed. [#91041][#91041]
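-
- A sketch of the previously failing pattern, assuming a primary key constraint named with the reserved keyword `primary` (table and column names are hypothetical):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE t (a INT8 NOT NULL, b INT8 NOT NULL, CONSTRAINT "primary" PRIMARY KEY (a));
- -- previously failed with a "constraint already exists" error
- ALTER TABLE t DROP CONSTRAINT "primary", ADD CONSTRAINT "primary" PRIMARY KEY (b);
- ~~~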
-- Fixed a bug where in large [multi-region clusters](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview), it was possible for the leasing mechanism used for jobs to get caught in a live-lock scenario, which could result in jobs not being adopted. [#91066][#91066]
-- Fixed a bug that caused incorrect results and internal errors when a [`LEFT JOIN`](https://www.cockroachlabs.com/docs/v22.1/joins) operated on a table with [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns). The bug only presented when the optimizer planned a "paired joiner". Only values of the virtual columns would be incorrect—they could be `NULL` when their correct value was not `NULL`. An internal error would occur in the same situation if the virtual column had a `NOT NULL` constraint. This bug has been present since v22.1.0. [#91017][#91017]
-
-
Performance improvements
-
-- Loading the Databases page in the UI is now less expensive when there are a large number of databases and a large number of tables in each database and a large number of ranges in the cluster. [#91014][#91014]
-
-
-
-- [Kafka sinks](https://www.cockroachlabs.com/docs/v22.1/changefeed-sinks#kafka-sink-configuration) can now (optionally) be configured with a "Compression" field to the `kafka_sink_config` option. This field can be set to `none` (default), `GZIP`, `SNAPPY`, `LZ4`, or `ZSTD`. Setting this field will determine the compression protocol used when emitting events. [#91276][#91276]
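-
- A sketch of setting the new field when creating a changefeed (the table name and Kafka address are placeholders):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE CHANGEFEED FOR TABLE t
-   INTO 'kafka://broker.example.com:9092'
-   WITH kafka_sink_config = '{"Compression": "GZIP"}';
- ~~~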
-
-
Operational changes
-
-- Logs produced by setting an increased `vmodule` level for `s3_storage` are now directed to the DEV channel rather than STDOUT. [#91960][#91960]
-
-- Introduced a metric (`replicas.leaders_invalid_lease`) that indicates how many replicas are Raft group leaders but holding invalid leases. [#91194][#91194]
-
-
DB Console changes
-
-- Changed the height of the column selector, so it can hint there are more options to be selected once scrolled. [#91910][#91910]
-- Added fingerprint ID in hex format to the [Statement Details](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page and [Transaction Details](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) page. [#91959][#91959]
-- Updated the tooltip on `SQL Statement Errors` chart on Metrics page. [#92711][#92711]
-
-
Bug fixes
-
-- Fixed a bug in which panics triggered by certain DDL statements were not properly recovered, leading to the cluster node crashing. [#91555][#91555]
-- Fixed a panic that could occur when calling `st_distancespheroid` or `st_distancesphere` with a spatial object containing a NaN coordinate. This now produces an error, `input is out of range`. [#91634][#91634]
-- Fixed a bug that resulted in some retriable errors not being retried during `IMPORT`. [#90432][#90432]
-- Fixed a bug in `Concat` projection operators for arrays that could cause non-null values to be added to the array when one of the arguments was null. [#91653][#91653]
-- Previously, `SET DEFAULT NULL` resulted in a column whose `DefaultExpr` was `NULL`. This was problematic when used with `ALTER COLUMN TYPE`, which creates a temporary computed column and therefore violated the validation rule that a computed column cannot have a default expression. This is now fixed by setting `DefaultExpr` to `nil` when `SET DEFAULT NULL` is used. [#91089][#91089]
-- Fixed a bug introduced in v21.2 that could cause an internal error in rare cases when a query required a constrained index scan to return results in order. [#91692][#91692]
-- Fixed a bug which, in rare cases, could result in a [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) missing rows occurring around the time of a split in writing transactions that take longer than the closed timestamp target duration (defaults to 3s). [#91749][#91749]
-- Added leading zeros to fingerprint IDs with less than 16 characters. [#91959][#91959]
-- Fixed a bug introduced in v20.2 that could in rare cases cause filters to be dropped from a query plan with many joins. [#91654][#91654]
-- Fixed an unhandled error that could happen if [`ALTER DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/alter-default-privileges) was run on the system database. [#92083][#92083]
-- Reduced the amount by which `RESTORE` over-splits ranges. This behavior is enabled by default. [#91141][#91141]
-- Fixed a bug causing changefeeds to fail when a value is deleted while running on a non-primary [column family with multiple columns](https://www.cockroachlabs.com/docs/v22.1/changefeeds-on-tables-with-column-families). [#91953][#91953]
-- Stripped quotation marks from database and table names to correctly query for index usage statistics. [#92282][#92282]
-- Fixed the [statement activity](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page so that it no longer shows multi-statement implicit transactions as "explicit." [#92430][#92430]
-- Fixed a bug existing since v20.2 that could cause incorrect results in rare cases for queries with inner joins and left joins. For the bug to occur, the left join had to be in the input of the inner join and the inner join filters had to reference both inputs of the left join, and not filter `NULL` values from the right input of the left join. Additionally, the right input of the left join had to contain at least one join, with one input not referenced by the left join's `ON` condition. [#92103][#92103]
-- When configured to true, the `sql.metrics.statement_details.dump_to_logs` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) no longer causes a mutex deadlock. [#92278][#92278]
-- Fixed incorrect cancellation logic when attempting to detect stuck rangefeeds. [#92702][#92702]
-- Fixed an internal error when comparing a tuple type with a non-tuple type. [#92714][#92714]
-- Fixed the `attidentity` value in `pg_catalog.pg_attribute` for columns defined with `GENERATED BY DEFAULT AS IDENTITY`; it is now `d`. [#92835][#92835]
-- Previously, CockroachDB could incorrectly evaluate queries that performed left semi and left anti "virtual lookup" joins on tables in `pg_catalog` or `information_schema`. These join types can be planned when a subquery is used inside of a filter condition. The bug was introduced in v20.2.0 and is now fixed. [#92881][#92881]
-
-
Performance improvements
-
-- To protect against unexpected situations where garbage collection would trigger too frequently, the GC score cooldown period has been lowered. The GC score is computed from MVCC stats and uses the ratio of live objects and the estimated garbage age to estimate how collectable old data is. The reduced cooldown triggers GC earlier, cutting the interval between runs by roughly a factor of 3 and peak garbage usage by roughly a factor of 2, at the expense of about a 30% increase in wasteful data scanning on constantly updated data. [#92816][#92816]
-- CockroachDB in some cases now correctly incorporates the value of the `OFFSET` clause when determining the number of rows that need to be read when the `LIMIT` clause is also present. Note that there was no correctness issue here; only extra, unnecessary rows could be read. [#92839][#92839]
-- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v22.1/show-backup) on a backup containing several table descriptors is now more performant. [#93143][#93143]
-
-
-
-
Contributors
-
-This release includes 75 merged PRs by 37 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- quanuw (first-time contributor)
-
-
-
-- Removed the feedback survey link from the DB Console. [#93278][#93278]
-- Improved the readability of the [metric graph](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) tooltip styling by preventing the content from collapsing. [#93929][#93929]
-- Fixed a bug where a timeseries query (`ts/query`) could return no data for graphs. Data is now returned because the resolution is adjusted to the sample size. [#93620][#93620]
-
-
Bug fixes
-
-- Fixed a bug that could manifest as [restore](https://www.cockroachlabs.com/docs/v22.1/restore) queries hanging during execution due to slow listing calls in the presence of several backup files. [#93224][#93224]
-- Fixed a bug where empty [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) commands would not terminate after an EOF character, or would error when encountering `\.` with no input. [#93260][#93260]
-- Fixed a bug where running multiple schema change statements in a single command using a driver that uses the extended pgwire protocol internally ([Npgsql](https://www.npgsql.org/) in .Net as an example) could lead to the error: `"attempted to update job for mutation 2, but job already exists with mutation 1"`. [#92304][#92304]
-- Fixed a bug where the non-default [`NULLS` ordering](https://www.cockroachlabs.com/docs/v22.1/order-by), `NULLS LAST`, was ignored in [window](https://www.cockroachlabs.com/docs/v22.1/window-functions) and [aggregate](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#aggregate-functions) functions. This bug could cause incorrect query results when `NULLS LAST` was used. This bug had been introduced in v22.1.0. [#93600][#93600]
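-
- A sketch of the kind of query that was affected, assuming a table `t` with columns `a` and `b` (hypothetical names):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- -- NULLS LAST was previously ignored inside the window function's ORDER BY
- SELECT a, sum(a) OVER (ORDER BY b NULLS LAST) FROM t;
- ~~~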
-- Fixed an issue where `DISTINCT ON` queries would fail with the error `"SELECT DISTINCT ON expressions must match initial ORDER BY expressions"` when the query included an [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by) clause containing `ASC NULLS LAST` or `DESC NULLS FIRST`. [#93608][#93608]
-- Previously, CockroachDB would error when receiving [`GEOMETRY` or `GEOGRAPHY`](https://www.cockroachlabs.com/docs/v22.1/spatial-glossary#data-types) types using binary parameters. This is now resolved. [#93686][#93686]
-- Fixed a bug where the `session_id` [session variable](https://www.cockroachlabs.com/docs/v22.1/show-vars) would not be properly set if used from a subquery. [#93857][#93857]
-- Server logs are now correctly fsynced at every syncInterval. [#93994][#93994]
-- [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v22.1/create-role), [`DROP ROLE`](https://www.cockroachlabs.com/docs/v22.1/drop-role), [`GRANT`](https://www.cockroachlabs.com/docs/v22.1/grant), and [`REVOKE`](https://www.cockroachlabs.com/docs/v22.1/revoke) statements no longer work when the transaction is in read-only mode. [#94104][#94104]
-- The `stxnamespace`, `stxkind`, and `stxstattarget` columns are now defined in the [`pg_statistics_ext` system catalog](https://www.cockroachlabs.com/docs/v22.1/pg-catalog). [#94008][#94008]
-- Fixed a bug where tables that receive writes concurrent with portions of an [`ALTER TABLE ... SET LOCALITY REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v22.1/set-locality) statement could fail with the error: `duplicate key value violates unique constraint "new_primary_key"`. This had been introduced in v22.1. [#94252][#94252]
-- Previously, CockroachDB could encounter an internal error when evaluating [window functions](https://www.cockroachlabs.com/docs/v22.1/window-functions) with a `RANGE` window frame mode with an `OFFSET PRECEDING` or `OFFSET FOLLOWING` boundary when an `ORDER BY` clause has a `NULLS LAST` option. This will now result in a regular error since the feature is marked as unsupported. [#94351][#94351]
-- Record types can now be encoded with the binary encoding of the PostgreSQL wire protocol. Previously, trying to use this encoding could cause a panic. [#94420][#94420]
-- Fixed a bug that caused incorrect selectivity estimation for queries with ORed predicates all referencing a common single table. [#94439][#94439]
-
-
Performance improvements
-
-- Improved the performance of [`crdb_internal.default_privileges`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) population. [#94338][#94338]
-
-
-
-- [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) now logs an error during the insert phase on the [`SQL_EXEC`](https://www.cockroachlabs.com/docs/v22.1/logging#sql_exec) logging channel. [#95175][#95175]
-- If `copy_from_retries_enabled` is set, [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) can now retry under certain safe circumstances: for example, when `copy_from_atomic_enabled` is `false`, no explicit transaction is running the `COPY`, and the error returned is retriable. [#95505][#95505]
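-
- A sketch of the session settings involved, assuming they are adjusted at the session level (the values shown are illustrative):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SET copy_from_retries_enabled = true;
- SET copy_from_atomic_enabled = false;
- ~~~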
-- `kv.bulkio.write_metadata_sst.enabled` now defaults to false. This change does not affect [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup) or [`RESTORE`](https://www.cockroachlabs.com/docs/v22.1/restore). [#96017][#96017]
-
-
DB Console changes
-
-- Removed the [**Reset SQL stats**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Reset index stats**](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page#index-details) buttons from the DB Console for non-admin users. [#95325][#95325]
-- [Graphs](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) can now be clicked on to toggle legend "stickiness" and make the points stop following the mouse. This makes it easier to read dense graphs with many series plotted together. [#94786][#94786]
-
-
Bug fixes
-
-- Fixed a bug where, in a cluster with nodes running both [v22.2]({% link releases/v22.2.md %}) and v22.1, [range replica](https://www.cockroachlabs.com/docs/v22.1/ui-replication-dashboard#review-of-cockroachdb-terminology) changes could sometimes fail on v22.1 leaseholders with the error `change replicas of r47 failed: descriptor changed: [expected] != [actual]`, without any apparent differences between the listed descriptors. Continuing to upgrade all nodes to v22.2 or rolling all nodes back to v22.1 would resolve this issue. [#94841][#94841]
-- It is now possible to run [`cockroach version`](https://www.cockroachlabs.com/docs/v22.2/cockroach-version) and [`cockroach start`](https://www.cockroachlabs.com/docs/v22.2/cockroach-start) (and possibly other sub-commands) when the user running the command does not have permission to access the current working directory. [#94926][#94926]
-- Fixed a bug where [`CLOSE ALL`](https://www.cockroachlabs.com/docs/v22.1/sql-grammar#close_cursor_stmt) would not respect the `ALL` flag and would instead attempt to close a cursor with no name. [#95440][#95440]
-- Fixed a crash that could happen when formatting a tuple with an unknown type. [#95422][#95422]
-- Fixed a bug where a [database restore](https://www.cockroachlabs.com/docs/v22.1/restore) would not [grant](https://www.cockroachlabs.com/docs/v22.1/grant) `CREATE` and `USAGE` on the public schema to the public [role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#users-and-roles). [#95537][#95537]
-- Fixed a bug where [`pg_get_indexdef`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) didn't show the expression used to define an [expression-based index](https://www.cockroachlabs.com/docs/v22.1/partial-indexes), as well as a bug where the function was incorrectly including columns stored by the index. [#95585][#95585]
-- Fixed a bug where a DNS lookup was performed during gossip remote forwarding while holding the gossip mutex. This could cause processing stalls if the DNS server was slow to respond. [#95441][#95441]
-- Fixed a bug where [`RESTORE SYSTEM USERS`](https://www.cockroachlabs.com/docs/v22.1/restore#restoring-users-from-system-users-backup) would fail to restore [role options](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#role-options). [#95295][#95295]
-- Reduced contention between queries to register, deregister, and cancel [sessions](https://www.cockroachlabs.com/docs/v22.1/show-sessions). [#95654][#95654]
-- Fixed a bug where a [backup](https://www.cockroachlabs.com/docs/v22.1/backup) of keys with many revisions would fail with `pebble: keys must be added in order`. [#95446][#95446]
-- Fixed the `array_to_string` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) so that nested arrays are traversed without printing [`ARRAY`](https://www.cockroachlabs.com/docs/v22.1/array) at each nesting level. [#95844][#95844]
-- Fixed a bug that caused [ranges](https://www.cockroachlabs.com/docs/v22.1/architecture/overview#architecture-range) to remain without a leaseholder in cases of asymmetric [network partitions](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#network-partition). [#95237][#95237]
-- Fixed a bug where [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) into a column with [collated strings](https://www.cockroachlabs.com/docs/v22.1/collate) would result in an error similar to `internal error: unknown type collatedstring`. [#96039][#96039]
-- Fixed a bug where spurious [transaction restarts](https://www.cockroachlabs.com/docs/v22.1/common-errors#restart-transaction) could occur when validating a [`FOREIGN KEY`](https://www.cockroachlabs.com/docs/v22.1/foreign-key) in the same transaction where the referenced table is modified. If the transaction was running at [`PRIORITY HIGH`](https://www.cockroachlabs.com/docs/v22.1/transactions#transaction-priorities), deadlocks could occur. [#96124][#96124]
-
-
-
-
Contributors
-
-This release includes 36 merged PRs by 22 authors.
-
-
-
-- SQL queries running on remote nodes now show up in CPU profiles with `distsql.*` labels. Currently, these include `appname`, `gateway`, `txn`, and `stmt`. [#97055][#97055]
-
-
Bug fixes
-
-- Fixed a bug where a node with a [disk stall](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#disk-stalls) would continue to accept new connections and preserve existing connections until the disk was no longer stalled. [#96369][#96369]
-- Fixed a bug where a [disk stall](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#disk-stalls) could go undetected under the rare circumstance that several goroutines simultaneously sync the data directory. [#96666][#96666]
-- Fixed a race condition where some operations waiting on locks could cause the lockholder [transaction](https://www.cockroachlabs.com/docs/v22.1/transactions) to be aborted if they occurred before the transaction could write its record. [#95215][#95215]
-- Fixed a bug where the [**Statement Details** page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) was unable to render. [#97057][#97057]
-
-
-
-
Contributors
-
-This release includes 21 merged PRs by 16 authors.
-
-
-
-- Added a hard limit to the amount of data that can be flushed to system tables for SQL stats. [#97401][#97401]
-
-
Operational changes
-
-- A [`BACKUP`](https://www.cockroachlabs.com/docs/v22.1/backup) which encounters too many retryable errors will now fail instead of pausing to allow subsequent backups the chance to succeed. [#96715][#96715]
-
-
Bug fixes
-
-- Fixed a bug in [Enterprise changefeeds](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) where long-running initial scans would fail to generate a checkpoint. Failure to generate a checkpoint is particularly bad if the changefeed restarts for any reason: without checkpoints, the changefeed restarts from the beginning, and in the worst case, when exporting substantially sized tables, changefeed initial scans may have a hard time completing. [#97052][#97052]
-- Fixed a bug where the [`SHOW GRANTS FOR public`](https://www.cockroachlabs.com/docs/v22.1/show-grants) command would return an error saying that the `public` role does not exist. [#96999][#96999]
-- The following spammy log message was removed: `> lease [...] expired before being followed by lease [...]; foreground traffic may have been impacted`. [#97378][#97378]
-- Fixed a bug in the query engine that could cause incorrect results in some cases when a [zigzag join](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#zigzag-joins) was planned. The bug could occur when the two indexes used for the zigzag join had a suffix of matching columns but with different directions. For example, planning a zigzag join with `INDEX(a ASC, b ASC)` and `INDEX(c ASC, b DESC)` could cause incorrect results. This bug has existed since at least [v19.1](https://www.cockroachlabs.com/docs/releases#v19-1). The optimizer will no longer plan a zigzag join in such cases. [#97440][#97440]
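-
- A sketch of the index shape described above, where two indexes share the suffix column `b` with different directions (table, column, and index names are hypothetical):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE t (a INT8, b INT8, c INT8, INDEX ab_idx (a ASC, b ASC), INDEX cb_idx (c ASC, b DESC));
- -- a zigzag join over ab_idx and cb_idx could previously return incorrect results
- SELECT * FROM t WHERE a = 1 AND c = 2;
- ~~~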
-- Added support for disabling cross-descriptor validation on lease renewal, which can be problematic when there are many descriptors with many foreign key references; in such cases, the cross-reference validation could starve schema changes. This behavior is controlled via the `sql.catalog.descriptor_lease_renewal_cross_validation` cluster setting. [#97644][#97644]
-- Columns referenced in partial index predicates and partial unique constraint predicates can no longer be dropped. The [`ALTER TABLE .. DROP COLUMN`](https://www.cockroachlabs.com/docs/v22.1/drop-column) statement now returns an error with a hint suggesting to drop the indexes and constraints first. This is a temporary safe-guard to prevent users from hitting [#96924][#96924]. This restriction will be lifted when that bug is fixed. [#97663][#97663]
-
-
-
-
Contributors
-
-This release includes 16 merged PRs by 12 authors.
-
-
-
-[#96924]: https://github.com/cockroachdb/cockroach/issues/96924
-[#96715]: https://github.com/cockroachdb/cockroach/pull/96715
-[#96999]: https://github.com/cockroachdb/cockroach/pull/96999
-[#97052]: https://github.com/cockroachdb/cockroach/pull/97052
-[#97378]: https://github.com/cockroachdb/cockroach/pull/97378
-[#97401]: https://github.com/cockroachdb/cockroach/pull/97401
-[#97440]: https://github.com/cockroachdb/cockroach/pull/97440
-[#97644]: https://github.com/cockroachdb/cockroach/pull/97644
-[#97663]: https://github.com/cockroachdb/cockroach/pull/97663
diff --git a/src/current/_includes/releases/v22.1/v22.1.17.md b/src/current/_includes/releases/v22.1/v22.1.17.md
deleted file mode 100644
index b0077724364..00000000000
--- a/src/current/_includes/releases/v22.1/v22.1.17.md
+++ /dev/null
@@ -1,5 +0,0 @@
-## v22.1.17
-
-Release Date: March 27, 2023
-
-{% include releases/release-downloads-docker-image.md release=include.release %}
\ No newline at end of file
diff --git a/src/current/_includes/releases/v22.1/v22.1.18.md b/src/current/_includes/releases/v22.1/v22.1.18.md
deleted file mode 100644
index 321501435b9..00000000000
--- a/src/current/_includes/releases/v22.1/v22.1.18.md
+++ /dev/null
@@ -1,90 +0,0 @@
-## v22.1.18
-
-Release Date: March 28, 2023
-
-{% include releases/release-downloads-docker-image.md release=include.release %}
-
-
Security updates
-
-- Previously, users could gain unauthorized access to [statement diagnostic bundles](https://www.cockroachlabs.com/docs/v22.1/ui-debug-pages#reports) they did not create if they requested the bundle through an HTTP request to `/_admin/v1/stmtbundle/` and correctly guessed its (non-secret) ID. This change locks down this endpoint behind the usual SQL gating that correctly uses the SQL user in the HTTP session as identified by their cookie. [#99055][#99055]
-- Ensured that no unsanitized URIs or secret keys get written to the [jobs table](https://www.cockroachlabs.com/docs/v22.1/show-jobs) if the [backup](https://www.cockroachlabs.com/docs/v22.1/backup) fails. [#99265][#99265]
-
-
SQL language changes
-
-- Increased the default value of [the `sql.stats.cleanup.rows_to_delete_per_txn` cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to 10k, to increase efficiency of the cleanup job for [SQL statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics). [#97722][#97722]
-- Added support for the syntax [`CREATE DATABASE IF NOT EXISTS ... WITH OWNER`](https://www.cockroachlabs.com/docs/v22.1/create-database). [#97976][#97976]
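-
- For example (the database and role names here are hypothetical):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE DATABASE IF NOT EXISTS movr WITH OWNER max;
- ~~~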
-- Added a new [session setting](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables), `optimizer_always_use_histograms`, which ensures that the optimizer always uses histograms when available to calculate the [statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) of every plan that it explores. Enabling this setting can prevent the optimizer from choosing a suboptimal [index](https://www.cockroachlabs.com/docs/v22.1/indexes) when statistics for a table are stale. [#98229][#98229]
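-
- A sketch of enabling the setting for the current session:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SET optimizer_always_use_histograms = true;
- ~~~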
-- The [`SHOW DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v22.1/show-default-privileges) command now has a column that says if the default privilege will give [the grant option](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#supported-privileges) to the grantee. [#98012][#98012]
-- Added a new internal virtual table [`crdb_internal.node_memory_monitors`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal). It exposes all of the current reservations with the [memory accounting system](https://www.cockroachlabs.com/docs/v22.1/ui-runtime-dashboard#memory-usage) on a single node. Access to the table requires [`VIEWACTIVITY` or `VIEWACTIVITYREDACTED` permissions](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#supported-privileges). [#98043][#98043]
-- Fixed the help message for the [`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update) statement to correctly position the optional `FROM` clause in the help output. [#99293][#99293]
-
-
Command-line changes
-
-- The `--drain-wait` argument to the [`cockroach node drain`](https://www.cockroachlabs.com/docs/v22.1/cockroach-node) command will be automatically increased if the command detects that it is smaller than the sum of the [cluster settings](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#cluster-settings) `server.shutdown.drain_wait`, `server.shutdown.connection_wait`, `server.shutdown.query_wait` times two, and `server.shutdown.lease_transfer_wait`. If the `--drain-wait` argument is 0, then no timeout is used. This recommendation [was already documented](https://www.cockroachlabs.com/docs/v22.1/node-shutdown#drain-timeout), but now the advice will be applied automatically. [#98578][#98578]
-
-
-
-- Previously, the [`ALTER TABLE ... INJECT STATISTICS`](https://www.cockroachlabs.com/docs/v22.1/alter-table) command would fail if a column with the [`COLLATED STRING` type](https://www.cockroachlabs.com/docs/v22.1/collate) had histograms to be injected, and this is now fixed. The bug has been present since at least [v21.2]({% link releases/v21.2.md %}). [#97492][#97492]
-- Fixed a bug where CockroachDB could encounter an internal error `"no bytes in account to release ..."` in rare cases. The bug was introduced in [v22.1]({% link releases/v22.1.md %}). [#97774][#97774]
-- [Transaction](https://www.cockroachlabs.com/docs/v22.1/transactions) uncertainty intervals are correctly configured for reverse scans again, ensuring that reverse scans cannot serve [stale reads](https://www.cockroachlabs.com/docs/v22.1/architecture/transaction-layer#stale-reads) when clocks in a cluster are skewed. [#97519][#97519]
-- The owner of the [public schema](https://www.cockroachlabs.com/docs/v22.1/schema-design-overview#schemas) can now be changed. Use [`ALTER SCHEMA public OWNER TO new_owner`](https://www.cockroachlabs.com/docs/v22.1/alter-schema). [#98064][#98064]
-- Fixed a bug in which [common table expressions](https://www.cockroachlabs.com/docs/v22.1/common-table-expressions) (CTEs) marked as `WITH RECURSIVE` which were not actually recursive could return incorrect results. This could happen if the CTE used a `UNION ALL`, because the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) incorrectly converted the `UNION ALL` to a `UNION`. This bug had existed since support for recursive CTEs was first added in [v20.1]({% link releases/v20.1.md %}). [#98114][#98114]
-- Fixed a bug present since [v22.1]({% link releases/v22.1.md %}), when [rangefeed](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) enablement overrides in span configs were introduced. If a rangefeed request reached a span outside the [range](https://www.cockroachlabs.com/docs/v22.1/architecture/glossary#architecture-range), the range cache was not invalidated, because the enablement settings were checked before determining whether the span was within the range. Requests could repeatedly reach the same incorrect range, causing errors until cache invalidation or a node restart. CockroachDB now checks that the span is within the range before checking the enablement settings, invalidating the cache when a request reaches an incorrect range so that subsequent requests reach the correct range. [#97660][#97660]
-- Fixed a bug that could crash the CockroachDB process when a query contained a literal [tuple expression](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#tuple-constructors) with more than two elements and only a single label, e.g., `((1, 2, 3) AS foo)`. [#98314][#98314]
-- Allow users with the `VIEWACTIVITY`/`VIEWACTIVITYREDACTED` [permissions](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#role-options) to access the [`crdb_internal.ranges_no_leases`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) table, necessary to view important DB Console pages (specifically, the [Databases Page](https://www.cockroachlabs.com/docs/v22.1/ui-databases-page), including database details, and database tables). [#98646][#98646]
-- Fixed a bug where using [`ST_Transform`](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#st_transform) could result in a memory leak. [#98835][#98835]
-- Fixed a bug that caused incorrect results when comparisons of [tuples](https://www.cockroachlabs.com/docs/v22.1/scalar-expressions#tuple-constructors) were done using the `ANY` [operator](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#operators). For example, an expression like `(x, y) = ANY (SELECT a, b FROM t WHERE ...)` could return `true` instead of the correct result of `NULL` when `x` and `y` were `NULL`, or `a` and `b` were `NULL`. This could only occur if the [subquery is correlated](https://www.cockroachlabs.com/docs/v22.1/subqueries.html#correlated-subqueries), i.e., it references columns from the outer part of the query. This bug was present since the [cost-based optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) was introduced in [v2.1]({% link releases/v2.1.md %}). [#99161][#99161]
-
-
-
-- Queries with invalid syntax are now logged at the `INFO` level on the [`SQL_EXEC` logging channel](https://www.cockroachlabs.com/docs/v22.1/sql-audit-logging). Previously, they were logged at the `ERROR` level. [#101090][#101090]
-
-
SQL language changes
-
-- Added the `prepared_statements_cache_size` [session setting](https://www.cockroachlabs.com/docs/v22.1/set-vars) that helps to prevent [prepared statement](https://www.cockroachlabs.com/docs/v22.1/savepoint#savepoints-and-prepared-statements) leaks by automatically deallocating the least-recently-used prepared statements when the cache reaches a given size. [#99264][#99264]
-
-
DB Console changes
-
-- New data is now auto-fetched every 5 minutes on the [**Statement and Transaction Fingerprints**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) pages. [#100702][#100702]
-
-
Bug fixes
-
-- Previously, [`ADD COLUMN ... DEFAULT cluster_logical_timestamp()`](https://www.cockroachlabs.com/docs/v22.1/alter-table) would crash the node and leave the table in a corrupt state. The root cause was a `nil` pointer dereference. The bug is now fixed by returning an unimplemented error, which disallows using this [builtin function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#array-functions) as the default value when backfilling. [#99682][#99682]
-- Fixed a bug where the stats columns on the [**Transaction Fingerprint Overview**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) page were continuously incrementing. [#99405][#99405]
-- Fixed a bug that prevented the [garbage collection](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#garbage-collection) job for the [`TRUNCATE TABLE`](https://www.cockroachlabs.com/docs/v22.1/truncate) command from successfully finishing if the table descriptor had already been garbage collected. The garbage collection job now succeeds in this situation by handling the missing descriptor edge case. [#100146][#100146]
-- Fixed a bug present in v21.1 that would cause the SQL gateway node to crash if a [view was created](https://www.cockroachlabs.com/docs/v22.1/create-view) with circular or self-referencing dependencies. [#100165][#100165]
-- Fixed a bug in evaluation of `ANY`, `SOME`, and `ALL` [sub-operators](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#operators) that would cause expressions like `NULL = ANY(ARRAY[]::INT[])` to return `NULL` instead of `FALSE`. [#100363][#100363]
-- Fixed a bug that could prevent a cached query with a [user-defined type](https://www.cockroachlabs.com/docs/v22.1/create-type) reference from being invalidated even after a [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) that should prevent the type from being resolved. [#100358][#100358]
-- Fixed a bug existing before v22.1 that could cause a projected expression to replace column references with incorrect values. [#100368][#100368]
-- Fixed a bug where the physical disk space of some tables could not be calculated. [#100937][#100937]
-- Fixed a bug so that the [`crdb_internal.deserialize_session`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) function works properly with prepared statements that have more parameter type hints than parameters. [#101363][#101363]
-- Fixed a bug where in [PostgreSQL Extended Query protocol](https://www.postgresql.org/docs/10/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY) mode it was possible for auto-commits to not execute certain logic for DDL, when certain DML ([`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert)/[`UPDATE`](https://www.cockroachlabs.com/docs/v22.1/update)/[`DELETE`](https://www.cockroachlabs.com/docs/v22.1/delete)) and DDL were combined in an implicit transaction. [#101630][#101630]
-- In the [**DB Console SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) pages, issuing a new request for stats while one is pending is now allowed and will replace the pending request. [#100702][#100702]
-- Fixed a rare condition that could allow a [transaction](https://www.cockroachlabs.com/docs/v22.1/transactions) to get stuck indefinitely waiting on a released row-level [lock](https://www.cockroachlabs.com/docs/v22.1/architecture/transaction-layer#concurrency-control) if the per-range lock count limit was exceeded while the transaction was waiting on another lock. [#100944][#100944]
-- Fixed a bug where CockroachDB incorrectly evaluated [`EXPORT`](https://www.cockroachlabs.com/docs/v22.1/export) statements that had a projection or rendering on top of the `EXPORT`. (For example, `WITH CTE AS (EXPORT INTO CSV 'nodelocal://1/export1/' FROM SELECT * FROM t) SELECT filename FROM CTE;` would not work.) Previously, such statements would result in panics or incorrect query results. [#101808][#101808]
-
-
Performance improvements
-
-- Removed prettify usages that could cause out-of-memory (OOM) errors on the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transaction Details**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) page. [#99452][#99452]
-
-
-
-
Contributors
-
-This release includes 38 merged PRs by 23 authors.
-
-
-
-- CSV is now a supported format for changefeeds. This only works with `initial_scan='only'` and does not work with the `diff` or `resolved` options. [#82355][#82355]
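-
- A sketch of a CSV changefeed with an initial scan only (the table name and sink URI are placeholders):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE CHANGEFEED FOR TABLE t
-   INTO 'kafka://broker.example.com:9092'
-   WITH format = csv, initial_scan = 'only';
- ~~~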
-
-
SQL language changes
-
-- The `bulkio.ingest.sender_concurrency_limit` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) can be used to adjust the concurrency at which any one SQL node, across all operations that it is running (e.g., [`RESTORE`s](https://www.cockroachlabs.com/docs/v22.1/restore), [`IMPORT`s](https://www.cockroachlabs.com/docs/v22.1/import), and [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes)), will send bulk ingest requests to the KV storage layer. [#81789][#81789]
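-
- A sketch of adjusting the setting (the value shown is illustrative, not a recommendation):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SET CLUSTER SETTING bulkio.ingest.sender_concurrency_limit = 16;
- ~~~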
-- The `sql.ttl.range_batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) is deprecated. [#82711][#82711]
-- The pgwire `DESCRIBE` command is now supported for use against a cursor created with the `DECLARE` command. This improves compatibility with PostgreSQL and is needed for compatibility with psycopg3 server-side cursors. [#82772][#82772]
-- Fixed an issue where `SHOW BACKUP with privileges` displayed grant statements with incorrect syntax (specifically, without the object type). For example, the output previously displayed `GRANT ALL ON status TO j4;` and now displays `GRANT ALL ON TYPE status TO j4;`. [#82727][#82727]
-- Added the `spanconfig.kvsubscriber.update_behind_nanos` metric to track the latency between realtime and the last update handled by the `KVSubscriber`. This metric can be used to monitor the staleness of a node's view of reconciled `spanconfig` state. [#82895][#82895]
-
-
DB Console changes
-
-- The time picker component has been improved such that users can use keyboard input to select a time without having to type `" (UTC)"`. [#82495][#82495]
-- The time picker now opens directly to the custom time selection menu when a custom time is already selected. A "Preset Time Ranges" navigation link has been added to go to the preset options from the custom menu. [#82495][#82495]
-
-
Bug fixes
-
-- The output of [`SHOW CREATE VIEW`](https://www.cockroachlabs.com/docs/v22.1/show-create#show-the-create-view-statement-for-a-view) now properly includes the keyword `MATERIALIZED` for materialized views. [#82196][#82196]
-- Fixed the `identity_generation` column in the [`information_schema.columns`](https://www.cockroachlabs.com/docs/v22.1/information-schema#columns) table so its value is either `BY DEFAULT`, `ALWAYS`, or `NULL`. [#82184][#82184]
-- Disk write probes during node liveness heartbeats will no longer get stuck on stalled disks, instead returning an error once the operation times out. Additionally, disk probes now run in parallel on nodes with multiple stores. [#81476][#81476]
-- Fixed a bug where an unresponsive node (e.g., a node with a stalled disk) could prevent other nodes from acquiring its leases, effectively stalling these ranges until the node was shut down or recovered. [#81815][#81815]
-- Fixed a crash that could happen when preparing a statement with unknown placeholder types. [#82647][#82647]
-- Previously, when adding a column to a pre-existing table and adding a partial index referencing that column in the same transaction, DML operations against the table while the schema change was ongoing would fail. These hazardous schema changes are now disallowed. [#82668][#82668]
-- Fixed a bug where CockroachDB would sometimes automatically retry the `BEGIN` statement of an explicit transaction. [#82681][#82681]
-- Fixed a bug where draining/drained nodes could re-acquire leases during an import or an index backfill. [#80834][#80834]
-- Fixed a bug where the startup of an internal component after a server restart could result in the delayed application of zone configuration. [#82858][#82858]
-- Previously, using [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) in two different statements on the same line would result in an assertion error. This is now a PG error with code `0A000`. [#82654][#82654]
-- Fixed a bug where KV requests, in particular export requests issued during a [backup](https://www.cockroachlabs.com/docs/v22.1/backup), were rejected incorrectly causing the backup to fail with a `batch timestamp must be after replica GC threshold` error. The requests were rejected on the pretext that their timestamp was below the [garbage collection threshold](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#garbage-collection) of the key span. This was because the [protected timestamps](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#protected-timestamps) were not considered when computing the garbage collection threshold for the key span being backed up. Protected timestamp records hold up the garbage collection threshold of a span during long-running operations such as backups to prevent revisions from being garbage collected. [#82757][#82757]
-
-
-
-
Contributors
-
-This release includes 54 merged PRs by 31 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- likzn (first-time contributor)
-
-
-
-- Fixed a rare bug where [replica rebalancing](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer) during write heavy workloads could cause keys to be deleted unexpectedly from a [local store](https://www.cockroachlabs.com/docs/v22.1/cockroach-start#flags-store). [#102190][#102190]
-- Fixed a bug introduced in v22.1.19, v22.2.8, and pre-release versions of 23.1 that could cause queries to return spurious insufficient [privilege](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#privileges) errors. For the bug to occur, two databases would need to have duplicate tables, each with a [foreign key](https://www.cockroachlabs.com/docs/v22.1/foreign-key) reference to another table. The error would then occur if the same SQL string was executed against both databases concurrently by users that have privileges over only one of the tables. [#102653][#102653]
-- Fixed a bug where a [backup](https://www.cockroachlabs.com/docs/v22.1/backup-and-restore-overview) with a key's [revision history](https://www.cockroachlabs.com/docs/v22.1/take-backups-with-revision-history-and-restore-from-a-point-in-time) split across multiple [SST files](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#ssts) may not have correctly restored the proper revision of the key. [#102372][#102372]
-- Fixed a bug present since v21.1 that allowed values to be inserted into an [`ARRAY`](https://www.cockroachlabs.com/docs/v22.1/array)-type column that did not conform to the inner-type of the array. For example, it was possible to insert `ARRAY['foo']` into a column of type `CHAR(1)[]`. This could cause incorrect results when querying the table. The [`INSERT`](https://www.cockroachlabs.com/docs/v22.1/insert) now errors, which is expected. [#102811][#102811]
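-
- A sketch of the previously accepted, now rejected insert (table and column names are hypothetical):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- CREATE TABLE t (c CHAR(1)[]);
- INSERT INTO t VALUES (ARRAY['foo']); -- now fails with an error instead of storing a non-conforming value
- ~~~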
-- Fixed a bug where [backup and restore](https://www.cockroachlabs.com/docs/v22.1/backup-and-restore-overview) would panic if the target is a synthetic public [schema](https://www.cockroachlabs.com/docs/v22.1/schema-design-overview), such as `system.public`. [#102783][#102783]
-- Fixed an issue present since v20.2.0 where running [`SHOW HISTOGRAM`](https://www.cockroachlabs.com/docs/v22.1/show-columns) to see the histogram for an [`ENUM`](https://www.cockroachlabs.com/docs/v22.1/enum)-type column would panic and crash the `cockroach` process. [#102829][#102829]
-
-
SQL language changes
-
-- Added two views to the [`crdb_internal`](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) catalog: `crdb_internal.statement_statistics_persisted`, which surfaces data in the persisted `system.statement_statistics` table, and `crdb_internal.transaction_statistics_persisted`, which surfaces the `system.transaction_statistics` table. [#99272][#99272]
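-
- A sketch of querying the new views (the exact columns are not shown here; this just illustrates that they can be selected from directly):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SELECT * FROM crdb_internal.statement_statistics_persisted LIMIT 10;
- SELECT * FROM crdb_internal.transaction_statistics_persisted LIMIT 10;
- ~~~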
-
-
-
-
Contributors
-
-This release includes 13 merged PRs by 14 authors.
-
-
-
-- If a page crashes, a force refresh is no longer required to be able to see the other pages on [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview). [#103326][#103326]
-- On the [SQL Activity > Fingerprints](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#sql-statement-fingerprints) view, users will not see stats that have not yet been flushed to disk. [#103130][#103130]
-- Users can now request the top-K statements by % of runtime on the [SQL Activity > Fingerprints](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) view. [#103130][#103130]
-- Added Search Criteria to the [Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages. [#103130][#103130]
-
-
Bug fixes
-
-- Fixed a bug in [closed timestamp](https://www.cockroachlabs.com/docs/v22.1/architecture/transaction-layer#closed-timestamps) updates within its side-transport. Previously, during [asymmetric network partitions](https://www.cockroachlabs.com/docs/v22.1/cluster-setup-troubleshooting#network-partition), a node that transfers a lease away, and misses a [liveness heartbeat](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#epoch-based-leases-table-data), could then erroneously update the closed timestamp during the stasis period of its liveness. This could lead to closed timestamp invariant violation, and node crashes. In extreme cases this could lead to inconsistencies in read-only queries. [#102597][#102597]
-- The value of `pg_constraint.conparentid` is now `0` rather than `NULL`. CockroachDB does not support constraints on [partitions](https://www.cockroachlabs.com/docs/v22.1/partitioning). [#103230][#103230]
-
-
-
-
Contributors
-
-This release includes 7 merged PRs by 6 authors.
-
-
-
-- Improved the error message that is logged when a changefeed is triggered by dropping the parent database or a type. The previous error was of the format `value type is not BYTES: UNKNOWN`. [#107943][#107943]
-
-
SQL language changes
-
-- When no data is persisted to SQL statistics tables, such as when no flush has occurred or when flushing is disabled, the endpoint's combined view is shown, which includes in-memory data. [#104052][#104052]
-
-
Bug fixes
-
-- Fixed a bug where `SHOW DEFAULT PRIVILEGES` returned no information for a database with uppercase or special characters in its name. [#103954][#103954]
-- Fixed a bug that would result in corruption of [encrypted data at rest on a cluster node]({% link v23.1/encryption.md %}). If a node with this corrupted state was restarted, the node could fail to rejoin the cluster. If multiple nodes encountered this bug at the same time during rollout, the cluster could lose [quorum]({% link v23.1/architecture/replication-layer.md %}#overview). For more information, refer to [Technical Advisory 106617](https://www.cockroachlabs.com/docs/advisories/a106617). [#104141][#104141]
-- Fixed a null-pointer exception introduced in v22.2.9 and v23.1.1 that could cause a node to crash when populating SQL Activity pages in the DB Console. [#104052][#104052]
-
-
-
-
Contributors
-
-This release includes 7 merged PRs by 9 authors.
-
-
-
-- Added the ability to provide short-lived OAuth 2.0 tokens as credentials for Google Cloud Storage and KMS. The token can be passed in the GCS or KMS URI via the new `BEARER_TOKEN` parameter for the `specified` authentication mode.
-
- Example GCS URI: `gs:///?AUTH=specified&BEARER_TOKEN=`
-
- Example KMS URI: `gs:///?AUTH=specified&BEARER_TOKEN=`
-
- There is no refresh mechanism associated with this token, so it is up to the user to ensure that its TTL is longer than the duration of the job or query that uses it. The job or query may fail irrecoverably if one of its tokens expires before completion. [#83210][#83210]
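-
-  As a rough sketch of how such a backup might be issued (the bucket, path, and token values below are hypothetical placeholders, not real credentials):
-
-  ~~~ sql
-  -- Hypothetical bucket, path, and OAuth token values for illustration only.
-  BACKUP INTO 'gs://example-bucket/backups?AUTH=specified&BEARER_TOKEN=ya29.example-token';
-  ~~~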
-
-
SQL language changes
-
-- CockroachDB now sends the `Severity_Nonlocalized` field in the `pgwire` Notice Response. [#82939][#82939]
-- Updated the `pg_backend_pid()` [built-in function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#built-in-functions) to match the data in the query cancellation key created during session initialization. This function is just for compatibility, and it does not return a real process ID. [#83167][#83167]
-- The log fields for captured index usage statistics are no longer redacted. [#83293][#83293]
-- When a cluster is in a mixed-version state during a rolling upgrade, CockroachDB now returns a message instructing users to run hash-sharded index creation statements from a pre-v22.1 node, or to wait until the upgrade is finalized. Previously, a descriptor validation error was thrown. [#83556][#83556]
-- The [sampled query telemetry log](https://www.cockroachlabs.com/docs/v22.1/logging-overview#logging-destinations) now includes a plan gist field. The plan gist field provides a compact representation of a logical plan for the sampled query. The field is written as a base64-encoded string. [#83643][#83643]
-- The error code reported when trying to use a system or [virtual column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) in the `STORING` clause of an `INDEX` has been changed from `XXUUU (internal error)` to `0A000 (feature not supported)`. [#83648][#83648]
-- [Foreign keys](https://www.cockroachlabs.com/docs/v22.1/foreign-key) can now reference the `crdb_region` column in [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#regional-tables) even if `crdb_region` is not explicitly part of a `UNIQUE` constraint. This is possible since `crdb_region` is implicitly included in every index on `REGIONAL BY ROW` tables as the partitioning key. This applies to whichever column is used as the partitioning column, in case a different name is used with a `REGIONAL BY ROW AS...` statement. [#83815][#83815]
-
-
Operational changes
-
-- Disk stalls no longer prevent the CockroachDB process from crashing when `Fatal` errors are emitted. [#83127][#83127]
-- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `bulkio.backup.checkpoint_interval` which controls the minimum interval between writes of progress checkpoints to external storage. [#83266][#83266]
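-
-  As a sketch, the interval can be adjusted like any other cluster setting (the `5m` value below is illustrative, not a recommendation):
-
-  ~~~ sql
-  -- Write backup progress checkpoints to external storage at most once every 5 minutes (illustrative value).
-  SET CLUSTER SETTING bulkio.backup.checkpoint_interval = '5m';
-  ~~~
-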
-- The application name associated with a SQL session is no longer considered redactable information. [#83553][#83553]
-
-
Command-line changes
-
-- The `cockroach demo` command now enables [rangefeeds](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds#enable-rangefeeds) by default. You can restore the previous behavior by starting the command with the `--auto-enable-rangefeeds=false` flag. [#83344][#83344]
-
-
DB Console changes
-
-- The DB Console now shows a more helpful error message when the [**Jobs** page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) times out, and an informational message appears after 2 seconds of loading to indicate that loading might take a while. Previously, the page showed the message `Promise timed out after 30000 ms`. [#82722][#82722]
-- The **Statement Details** page was renamed to **Statement Fingerprint**. The [**Statement Fingerprint**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page now shows charts for: Execution and Planning Time, Rows Processed, Execution Retries, Execution Count, and Contention. [#82960][#82960]
-- The time interval component on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages has been added to the **Statement Fingerprint** **Overview** and [**Explain Plans**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#explain-plans) tabs, and the [**Transaction Details**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page#transaction-details-page) page. [#82721][#82721]
-- Added a confirmation modal to the `reset SQL Stats` button. [#83142][#83142]
-- Application names and database names are now sorted in the dropdown menus. [#83334][#83334]
-- A new single column called **Rows Processed**, displayed by default, combines the columns rows read and rows written on the **Statements** and **Transactions** pages. [#83511][#83511]
-- The time interval selected on the [**Metrics**](https://www.cockroachlabs.com/docs/v22.1/ui-overview#metrics) page and the [**SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) pages are now aligned. If the user changes the time interval on one page, the value will be the same for the other. [#83507][#83507]
-- Added a label to the **Statement**, **Statement Fingerprint**, and **Transaction** pages with information about the time interval for which information is shown. The **Execution Stats** tab was removed from the **Statement Fingerprint** page. [#83333][#83333]
-- Removed the 10 and 30 minute options on the **SQL Activity** page. [#83542][#83542]
-- On the **Statements** page, users can no longer filter statements by searching for text in the `EXPLAIN` plan. [#83652][#83652]
-- Updated the tooltips on the **Statements** and **Transactions** pages in the DB Console for improved user experience. [#83540][#83540]
-
-
Bug fixes
-
-- Fixed a bug where, in earlier v22.1 releases, added validation could cause problems for descriptors which carried invalid back references due to a previous bug in v21.1. This stricter validation could result in a variety of query failures. CockroachDB now weakens the validation to permit the corruption. A subsequent fix in v22.2 is scheduled that will repair the invalid reference. [#82859][#82859]
-- Added missing support for preparing a `DECLARE` cursor statement with placeholders. [#83001][#83001]
-- CockroachDB now treats node unavailable errors as retryable [changefeed](https://www.cockroachlabs.com/docs/v22.1/change-data-capture-overview) errors. [#82874][#82874]
-- CockroachDB now ensures running changefeeds do not inhibit node shutdown. [#82874][#82874]
-- **Last Execution** time now shows the correct value on the **Statement Fingerprint** page. [#83114][#83114]
-- CockroachDB now applies the proper multiplying factor to the contention value on the **Statement Details** page. [#82960][#82960]
-- CockroachDB now prevents disabling [TTL](https://www.cockroachlabs.com/docs/v22.1/row-level-ttl) with `ttl = 'off'` to avoid conflicting with other TTL settings. To disable TTL, use `RESET (ttl)`. [#83216][#83216]
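-
-  For example, assuming a hypothetical table named `events` with row-level TTL configured, TTL can be disabled with:
-
-  ~~~ sql
-  -- `events` is a hypothetical table name; RESET (ttl) removes the TTL configuration.
-  ALTER TABLE events RESET (ttl);
-  ~~~
-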
-- Fixed a panic that could occur if the `inject_retry_errors_enabled` cluster setting is true and an `INSERT` is executed outside of an explicit transaction. [#83193][#83193]
-- Previously, a user could be connected to a database but unable to see the metadata for that database in [`pg_catalog`](https://www.cockroachlabs.com/docs/v22.1/pg-catalog) if the user did not have privileges for the database. Now, users can always see the `pg_catalog` metadata for a database they are connected to (see [#59875](https://github.com/cockroachdb/cockroach/issues/59875)). [#83360][#83360]
-- The **Statement Fingerprint** page now finds the stats when the `unset` application filter is selected. [#83334][#83334]
-- Fixed a bug where no validation was performed when adding a [virtual computed column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) which was marked `NOT NULL`. This meant that it was possible to have a virtual computed column with an active `NOT NULL` constraint despite having rows in the table for which the column was `NULL`. [#83353][#83353]
-- Fixed the behavior of the [`soundex` function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#string-and-byte-functions) when passed certain Unicode inputs. Previously, certain Unicode inputs could result in crashes, errors, or incorrect output. [#83435][#83435]
-- Fixed a bug where a lock could be held for a long period of time when adding a new column to a table (or altering a column type). This contention could make the [**Jobs** page](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) non-responsive and job adoption slow. [#83306][#83306]
-- Fixed a bug where a panic could occur during server startup when restarting a node which is running a garbage collection job. [#83474][#83474]
-- The period selected on the **Metrics** page time picker is preserved when refreshing the page, and no longer changes to a custom period. [#83507][#83507]
-- Changefeeds no longer error out when attempting to checkpoint during intermediate pause-requested or cancel-requested states. [#83569][#83569]
-- CockroachDB now retries S3 operations when they error out with a read connection reset error instead of failing the top-level job. [#83581][#83581]
-- The **Statements** table for a transaction in the **Transaction Details** page now shows the correct number of statements for a transaction. [#83651][#83651]
-- Fixed a bug that prevented [partial indexes](https://www.cockroachlabs.com/docs/v22.1/partial-indexes) from being used in some query plans. For example, a partial index with a predicate `WHERE a IS NOT NULL` was not previously used if `a` was a `NOT NULL` column. [#83241][#83241]
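-
-  A minimal sketch of the previously affected case, using a hypothetical table `t` with a `NOT NULL` column `a`:
-
-  ~~~ sql
-  -- Hypothetical schema for illustration.
-  CREATE TABLE t (a INT NOT NULL, b INT);
-  -- Partial index whose predicate is always true because `a` is NOT NULL.
-  CREATE INDEX t_partial_idx ON t (b) WHERE a IS NOT NULL;
-  -- The optimizer can now consider t_partial_idx for queries like this one.
-  EXPLAIN SELECT b FROM t WHERE a IS NOT NULL;
-  ~~~
-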
-- Index joins now consider functional dependencies from their input when determining equivalent columns instead of returning an internal error. [#83549][#83549]
-- An error message that referred to a non-existent cluster setting now refers to the correct cluster setting: `bulkio.backup.deprecated_full_backup_with_subdir.enabled`. [#81976][#81976]
-- Previously, the `CREATE` statement for the [`crdb_internal.cluster_contended_keys` view](https://www.cockroachlabs.com/docs/v22.1/crdb-internal) was missing the `crdb_internal.table_indexes.descriptor_id = crdb_internal.cluster_contention_events.table_id` `JOIN` condition, resulting in the view having more rows than expected. Now, the view properly joins the `crdb_internal.cluster_contention_events` and `crdb_internal.table_indexes` tables with all necessary `JOIN` conditions. [#83523][#83523]
-- Fixed a bug where `ADD COLUMN` or `DROP COLUMN` statements with the legacy schema changer could fail on tables with large rows due to exceeding the Raft command maximum size. [#83816][#83816]
-
-
Performance improvements
-
-- This release significantly improves the performance of [`IMPORT` statements](https://www.cockroachlabs.com/docs/v22.1/import) when the source produces data that is not sorted by the destination table's primary key, especially if the destination table has a very large primary key with many columns. [#82746][#82746]
-- [Decommissioning nodes](https://www.cockroachlabs.com/docs/v22.1/node-shutdown) is now substantially faster, particularly for small to moderately loaded nodes. [#82680][#82680]
-- Queries with filters containing tuples in `= ANY` expressions, such as `(a, b) = ANY(ARRAY[(1, 10), (2, 20)])`, are now index accelerated. [#83467][#83467]
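-
-  A sketch of the kind of filter that is now index accelerated, assuming a hypothetical table `kv` with an index on `(a, b)`:
-
-  ~~~ sql
-  -- Hypothetical table and index for illustration.
-  CREATE TABLE kv (a INT, b INT, v STRING, INDEX (a, b));
-  -- This tuple filter can now be served by a constrained scan of the (a, b) index.
-  SELECT * FROM kv WHERE (a, b) = ANY (ARRAY[(1, 10), (2, 20)]);
-  ~~~
-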
-- Fixed a bug where it was possible to accrue [MVCC](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#mvcc) garbage for much longer than needed. [#82967][#82967]
-
-
-
-- Added access control checks to three [multi-region related built-in functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators#multi-region-functions). [#83986][#83986]
-
-
SQL language changes
-
-- `crdb_internal.validate_ttl_scheduled_jobs` and `crdb_internal.repair_ttl_table_scheduled_job` can now only be run by users with the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#83972][#83972]
-- `txn_fingerprint_id` has been added to `crdb_internal.node_statement_statistics`. The type of the column is `NULL` or `STRING`. [#84020][#84020]
-- The [sampled query telemetry log](https://www.cockroachlabs.com/docs/v22.1/logging-overview#logging-destinations) now includes session, transaction, and statement IDs, as well as the database name of the query. [#84026][#84026]
-- `crdb_internal.compact_engine_spans` can now only be run by users with the [`admin` role](https://www.cockroachlabs.com/docs/v22.1/security-reference/authorization#admin-role). [#84095][#84095]
-
-
DB Console changes
-
-- Updated `User` column name to `User Name` and fixed `High-water Timestamp` column tooltip on the **Jobs** page. [#83914][#83914]
-- Added the ability to search for exact terms in order when wrapping a search in quotes. [#84113][#84113]
-
-
Bug fixes
-
-- A flush message sent during portal execution in the `pgwire` extended protocol no longer results in an error. [#83955][#83955]
-- Previously, [virtual computed columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) which were marked as `NOT NULL` could be added to new [secondary indexes](https://www.cockroachlabs.com/docs/v22.1/indexes). Now, attempts to add such columns to a secondary index will result in an error. Note that such invalid columns can still be added to tables. Work to resolve that bug is tracked in #[81675](https://github.com/cockroachdb/cockroach/issues/81675). [#83551][#83551]
-- Statement and transaction statistics are now properly recorded for implicit transactions with multiple statements. [#84020][#84020]
-- The `SessionTransactionReceived` session phase time is no longer recorded incorrectly (which caused large transaction times to appear in the Console) and has been renamed to `SessionTransactionStarted`. [#84030][#84030]
-- Fixed a rare issue where the failure to apply a [Pebble](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#pebble) manifest change (typically due to block device failure or unavailability) could result in an incorrect [LSM](https://www.cockroachlabs.com/docs/v22.1/architecture/storage-layer#log-structured-merge-trees) state. Such a state would likely result in a panic soon after the failed application. This change alters the behavior of Pebble to panic immediately in the case of a failure to apply a change to the manifest. [#83735][#83735]
-- Fixed a bug which could crash nodes when visiting the [DB Console Statements](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page. This bug was present since version v21.2.0. [#83714][#83714]
-- Moved the connection OK log and metric to the same location, after authentication completes, for consistency. This resolves an inconsistency in the DB Console where the log and metric did not match. [#84103][#84103]
-- CockroachDB previously would not normalize `timestamp/timestamptz - timestamp/timestamptz` like PostgreSQL does in some cases (depending on the query). This is now fixed. [#83999][#83999]
-- Custom time period selection is now aligned between the [Metrics](https://www.cockroachlabs.com/docs/v22.1/ui-overview-dashboard) and [SQL Activity](https://www.cockroachlabs.com/docs/v22.1/ui-overview#sql-activity) pages in the DB Console. [#84184][#84184]
-- Fixed a critical bug (#[83687](https://github.com/cockroachdb/cockroach/issues/83687)) introduced in v22.1.0 where a failure to transfer a [lease](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#leases) in the joint config may result in range unavailability. The fix allows the original [leaseholder](https://www.cockroachlabs.com/docs/v22.1/architecture/replication-layer#leases) to reacquire the lease so that lease transfer can be retried. [#84145][#84145]
-- Fixed a minor bug that caused internal errors and poor index recommendations when running [`EXPLAIN`](https://www.cockroachlabs.com/docs/v22.1/explain) statements. [#84220][#84220]
-- Fixed a bug where [`ALTER TABLE ... SET LOCALITY REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v22.1/set-locality#set-the-table-locality-to-regional-by-row) could leave the region `ENUM` type descriptor unaware of a dependency on the altered table. This would, in turn, wrongly permit a `DROP REGION` to succeed, rendering the table unusable. Note that this fix does not help existing clusters which have already run such an `ALTER TABLE`; see #[84322](https://github.com/cockroachdb/cockroach/issues/84322) for more information on this case. [#84339][#84339]
-- Fixed a bug that could cause internal errors in rare cases when running queries with [`GROUP BY`](https://www.cockroachlabs.com/docs/v22.1/select-clause#create-aggregate-groups) clauses. [#84307][#84307]
-- Fixed a bug in transaction conflict resolution which could allow backups to wait on long-running transactions. [#83900][#83900]
-- Fixed an internal error `node ... with MaxCost added to the memo` that could occur during planning when calculating the cardinality of an outer join when one of the inputs had 0 rows. [#84377][#84377]
-
-
Known limitations
-
-- A performance regression exists for v22.1.4 and v22.1.5 that causes [DB Console Metrics pages](https://www.cockroachlabs.com/docs/v21.2/ui-overview-dashboard) to fail to load, or to load slower than expected, when attempting to display metrics graphs. This regression is fixed in CockroachDB v22.1.6. [#85636](https://github.com/cockroachdb/cockroach/issues/85636)
-
-
-
-- [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v22.1/as-of-system-time) now takes the time zone into account when converting to UTC. For example, `2022-01-01 08:00:00-04:00` is now treated the same as `2022-01-01 12:00:00` instead of being interpreted as `2022-01-01 08:00:00`. [#84663][#84663]
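-
-  For example, using a hypothetical table `t`, the following two queries now read at the same UTC timestamp:
-
-  ~~~ sql
-  -- `t` is a hypothetical table name for illustration.
-  SELECT * FROM t AS OF SYSTEM TIME '2022-01-01 08:00:00-04:00';
-  SELECT * FROM t AS OF SYSTEM TIME '2022-01-01 12:00:00';
-  ~~~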
-
-
DB Console changes
-
-- Updated labels from "date range" to "time interval" on the time picker (custom option, preset title, and previous and next arrows). [#84517][#84517]
-- Removed `View Statement Details` link inside the [**Session Details**](https://www.cockroachlabs.com/docs/v22.1/ui-sessions-page) page. [#84502][#84502]
-- Updated the message when there is no data on the selected time interval on the [**Statements**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v22.1/ui-transactions-page) pages. [#84623][#84623]
-
-
Bug fixes
-
-- Fixed a conversion on the jobs endpoint so that the [**Jobs**](https://www.cockroachlabs.com/docs/v22.1/ui-jobs-page) page no longer returns a `500` error when a job contains an error with quotes. [#84464][#84464]
-- The 'Parse', 'Bind', and 'Execute' `pgwire` commands now return an error if they are used during an aborted transaction. [`COMMIT`](https://www.cockroachlabs.com/docs/v22.1/commit-transaction) and [`ROLLBACK`](https://www.cockroachlabs.com/docs/v22.1/rollback-transaction) statements are still allowed during an aborted transaction. [#84329][#84329]
-- Sorting on the plans table inside the [**Statement Details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-details-page) page now works properly. [#84627][#84627]
-- Fixed a bug that could cause [unique indexes](https://www.cockroachlabs.com/docs/v22.1/unique) to be unexpectedly dropped after running an [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v22.1/alter-primary-key) statement, if the new primary key column set is a subset of the old primary key column set. [#84570][#84570]
-- Fixed a bug where some statements in a batch would not get executed if the following conditions were met:
- - A batch of statements is sent in a single string.
- - A [`BEGIN`](https://www.cockroachlabs.com/docs/v22.1/begin-transaction) statement appears in the middle of the batch.
- - The `enable_implicit_transaction_for_batch_statements` [session variable](https://www.cockroachlabs.com/docs/v22.1/set-vars) is set to `true`. (This defaults to `false` in v22.1.)
- This bug was introduced in v22.1.2. [#84593][#84593]
-- Previously, CockroachDB could deadlock when evaluating analytical queries if multiple queries had to [spill to disk](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution#disk-spilling-operations) at the same time. This is now fixed by making some of the queries error out instead. If you know that there is no deadlock, and that some analytical queries that have spilled are simply taking too long and blocking other queries from spilling, you can adjust the newly introduced `sql.distsql.acquire_vec_fds.max_retries` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) (use `0` to restore the previous behavior of waiting indefinitely until spilling resources become available). [#84657][#84657]
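-
-  As a sketch, the previous behavior of waiting indefinitely can be restored with:
-
-  ~~~ sql
-  -- 0 disables the retry limit and restores the previous indefinite-wait behavior.
-  SET CLUSTER SETTING sql.distsql.acquire_vec_fds.max_retries = 0;
-  ~~~
-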
-- Fixed a bug where cluster restores of older backups could silently clobber system tables or fail to complete. [#84904][#84904]
-- Fixed a bug that was introduced in v21.2 that could cause increased memory usage when scanning a table with wide rows. [#83966][#83966]
-- Fixed a bug in the `concat` projection operator on arrays that incorrectly output `NULL` when given `NULL` arguments, even though the operator can handle `NULL` arguments and may produce a non-null result. [#84615][#84615]
-- Reduced foreground latency impact when performing changefeed backfills by adjusting the `changefeed.memory.per_changefeed_limit` [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) to 128MiB (Enterprise only). [#84702][#84702]
-
-
Known limitations
-
-- A performance regression exists for v22.1.4 and v22.1.5 that causes [DB Console Metrics pages](https://www.cockroachlabs.com/docs/v21.2/ui-overview-dashboard) to fail to load, or to load slower than expected, when attempting to display metrics graphs. This regression is fixed in CockroachDB v22.1.6. [#85636](https://github.com/cockroachdb/cockroach/issues/85636)
-
-
-
-- [Client certificates](https://www.cockroachlabs.com/docs/v22.1/authentication#client-authentication) now have tenant scoping, which allows an operator to authenticate a client to a specific tenant. A tenant-scoped client certificate contains the client name in the CN and the tenant ID in the URIs section of the Subject Alternative Name (SAN) values. The format of the URI SAN is `crdb://tenant/<tenant_id>/user/<username>`. [#84371][#84371]
-- The HTTP endpoints under the `/api/v2` prefix will now accept cookie-based authentication similar to other HTTP endpoints used by the [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview). The encoded session must be in a cookie named `"session"`, and the `"X-Cockroach-API-Session"` header is required to be set to `"cookie"` for the session to be read from the cookie header. A cookie provided without the custom header present will be ignored. [#85553][#85553]
-
-
General changes
-
-- Upgraded `cloud.google.com/go/storage` from v1.18.2 to v1.21.0 to allow for injection of custom retry logic in the [SDK](https://cloud.google.com/sdk). [#85763][#85763]
-
-
SQL language changes
-
-- Removed the `DatabaseID` field from the sampled query telemetry log due to its potential to cause indefinite blocking in the case of a lease acquisition failure. [#85026][#85026]
-- The structured payloads used for telemetry logs now include the following new fields:
-
- - `MaxFullScanRowsEstimate`: The maximum number of rows scanned by a full scan, as estimated by the optimizer.
- - `TotalScanRowsEstimate`: The total number of rows read by all scans in a query, as estimated by the optimizer.
- - `OutputRowsEstimate`: The number of rows output by a query, as estimated by the optimizer.
- - `StatsAvailable`: Whether table statistics were available to the optimizer when planning a query.
- - `NanosSinceStatsCollected`: The maximum number of nanoseconds that have passed since stats were collected on any table scanned by a query.
- - `BytesRead`: The number of bytes read from disk.
- - `RowsRead`: The number of rows read from disk.
- - `RowsWritten`: The number of rows written.
- - `InnerJoinCount`: The number of inner joins in the query plan.
- - `LeftOuterJoinCount`: The number of left (or right) outer joins in the query plan.
- - `FullOuterJoinCount`: The number of full outer joins in the query plan.
- - `SemiJoinCount`: The number of semi joins in the query plan.
- - `AntiJoinCount`: The number of anti joins in the query plan.
- - `IntersectAllJoinCount`: The number of intersect all joins in the query plan.
- - `ExceptAllJoinCount`: The number of except all joins in the query plan.
- - `HashJoinCount`: The number of hash joins in the query plan.
- - `CrossJoinCount`: The number of cross joins in the query plan.
- - `IndexJoinCount`: The number of index joins in the query plan.
- - `LookupJoinCount`: The number of lookup joins in the query plan.
- - `MergeJoinCount`: The number of merge joins in the query plan.
- - `InvertedJoinCount`: The number of inverted joins in the query plan.
- - `ApplyJoinCount`: The number of apply joins in the query plan.
- - `ZigZagJoinCount`: The number of zig zag joins in the query plan. [#85337][#85337] [#85743][#85743]
-
-
Operational changes
-
-- Telemetry logs will now display more finely redacted error messages from SQL execution. Previously, the entire error string was fully redacted. [#85403][#85403]
-
-
Command-line changes
-
-- The CLI now contains a flag (`--log-config-vars`) that allows for environment variables to be specified for expansion within the logging configuration file. This allows a single logging configuration file to service an array of sinks without further manipulation of the configuration file. [#85171][#85171]
-
-
API endpoint changes
-
-- A new `/api/v2/sql/` endpoint enables execution of simple SQL queries over HTTP. [#84374][#84374]
-
-
Bug fixes
-
-- Fixed an issue with incorrect start time position of selected time range on the [Metrics page](https://www.cockroachlabs.com/docs/v22.1/ui-overview#metrics). [#85835][#85835]
-- Fixed an issue where the [`information_schema`](https://www.cockroachlabs.com/docs/v22.1/information-schema) and [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v22.1/show-grants) command did not report that object owners have permission to `GRANT` privileges on that object. [#84918][#84918]
-- Fixed an issue where imports and rebalances were being slowed down due to the accumulation of empty directories from range snapshot applications. [#84223][#84223]
-- The v22.1 upgrade migration `21.2-56: populate RangeAppliedState.RaftAppliedIndexTerm for all ranges` is now more resilient to failures. This migration must be applied across all ranges and replicas in the system, and can fail with `operation "wait for Migrate application" timed out` if any replicas are temporarily unavailable, which is increasingly likely to happen in large clusters with many ranges. Previously, this would restart the migration from the start. [#84909][#84909]
-- Fixed a bug where using `CREATE SCHEDULE` in a mixed version cluster could prevent the scheduled job from actually running because of incorrectly writing a lock file. [#84372][#84372]
-- Previously, [restoring from backups](https://www.cockroachlabs.com/docs/v22.1/backup-and-restore-overview) on mixed-version clusters that had not yet upgraded to v22.1 could fail with `cannot use bulkio.restore_at_current_time.enabled until version MVCCAddSSTable`. Restores now fall back to the v21.2 behavior instead of erroring in this scenario. [#84641][#84641]
-- Fixed incorrect error handling that could cause casts to OID types to fail in some cases. [#85124][#85124]
-- Fixed a bug where the privileges for an object owner would not be correctly transferred when the owner was changed. [#85083][#85083]
-- The `crdb_internal.deserialize_session` built-in function no longer causes an error when handling an empty prepared statement. [#85122][#85122]
-- Fixed a bug introduced in v20.2 that could cause a panic when an expression contained a geospatial comparison like `~` that was negated. [#84630][#84630]
-- Fixed a bug where new leaseholders with a `VOTER_INCOMING` type would not always be detected properly during query execution, leading to occasional increased tail latencies due to unnecessary internal retries. [#85315][#85315]
-- Fixed a bug introduced in v22.1.0 that could cause the optimizer to not use auto-commit for some mutations in multi-region clusters when it should have done so. [#85434][#85434]
-- Fixed a bug introduced in v22.1.0 that could cause the optimizer to reject valid bounded staleness queries with the error `unimplemented: cannot use bounded staleness for DISTRIBUTE`. [#85434][#85434]
-- Previously, concatenating a UUID with a string would not use the normal string representation of the UUID values. This is now fixed so that, for example, `'eb64afe6-ade7-40ce-8352-4bb5eec39075'::UUID || 'foo'` returns `eb64afe6-ade7-40ce-8352-4bb5eec39075foo` rather than the encoded representation. [#85416][#85416]
-- Fixed a bug where CockroachDB could run into an error when a query included a limited reverse scan and some rows needed to be retrieved by `GET` requests. [#85584][#85584]
-- Fixed a bug where the SQL execution HTTP endpoint did not properly support queries with multiple result values. [#84374][#84374]
-- Fixed a bug where clients could sometimes receive errors due to lease acquisition timeouts of the form `operation "storage.pendingLeaseRequest: requesting lease" timed out after 6s`. [#85428][#85428]
-- The [**Statement details**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page now renders properly for statements where the hex representation of the `fingerprint_id` is less than 16 digits. [#85529][#85529]
-- Fixed a bug that could cause union queries to return incorrect results in rare cases. [#85654][#85654]
-- Fixed a bug that could cause upgrades to fail if there was a table with a computed column that used a cast from [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v22.1/timestamp) to [`STRING`](https://www.cockroachlabs.com/docs/v22.1/string). [#85779][#85779]
-- Fixed a bug that could cause a panic in rare cases when the unnest() [function](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) was used with a tuple return type. [#85349][#85349]
-- Fixed an issue where the `NO_INDEX_JOIN` hint could be ignored by the optimizer in some cases, causing it to create a query plan with an index join. [#85917][#85917]
-- Fixed a bug where changefeed jobs undergoing catch-up scans could fail with the error `expected provisional value for intent with ts X, found Y`. [#86117][#86117]
-- Previously, an empty column in the input to `COPY ... FROM CSV` would be treated as an empty string. Now, this is treated as `NULL`. The quoted empty string can still be used to input an empty string. Similarly, if a different `NULL` token is specified in the command options, it can be quoted in order to be treated as the equivalent string value. [#85926][#85926]
-- Fixed a bug where attempting to select data from a table that had different partitioning columns used for the primary and secondary indexes could cause an error. This occurred if the primary index had zone configurations applied to the index partitions with different regions for different partitions, and the secondary index had a different column type than the primary index for its partitioning column(s). [#86218][#86218]
-
-
Performance improvements
-
-- Previously, a sudden increase in the volume of pending MVCC GC work could impact foreground latencies. These sudden increases commonly occurred when:
-
- - `gc.ttlseconds` was reduced dramatically over tables/indexes that accrue a lot of MVCC garbage,
- - a paused backup job from more than one day ago was canceled or failed, or
- - a backup job that started more than one day ago just finished.
-
-An indicator of a large increase in the volume of pending MVCC GC work is a steep climb in the **GC Queue** graph on the **Metrics** page of the [DB Console](https://www.cockroachlabs.com/docs/v22.1/ui-overview). With this fix, the effect on foreground latencies as a result of this sudden buildup is reduced. [#85899][#85899]
-
-
-
-- The new `kv.rangefeed.range_stuck_threshold` (default `0`, i.e., disabled) [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) instructs rangefeed clients (used internally by [changefeeds](https://www.cockroachlabs.com/docs/v22.1/create-and-configure-changefeeds)) to restart automatically if no checkpoint or other event has been received from the server for some time. This is a defense-in-depth mechanism which will log output as follows if triggered: `restarting stuck rangefeed: waiting for r100 (n1,s1):1 [threshold 1m]: rangefeed restarting due to inactivity`. [#87253][#87253]
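-
-  For example, to enable the mechanism with the threshold shown in the log message above (the `1m` value is illustrative):
-
-  ~~~ sql
-  -- A non-zero duration enables automatic restarts of stuck rangefeeds.
-  SET CLUSTER SETTING kv.rangefeed.range_stuck_threshold = '1m';
-  ~~~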
-
-
SQL language changes
-
-- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v22.1/cluster-settings) `sql.stats.response.show_internal` (default: `false`) that can be set to `true` to display information about internal statistics on the [SQL Activity page](https://www.cockroachlabs.com/docs/v22.1/ui-sql-dashboard), with fingerprint option. [#86869][#86869]
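-
-  For example, to display internal statistics:
-
-  ~~~ sql
-  -- Defaults to false; true surfaces internal statistics on the SQL Activity page.
-  SET CLUSTER SETTING sql.stats.response.show_internal = true;
-  ~~~
-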
-- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v22.1/explain-analyze) output now contains a warning when the estimated row count for scans is inaccurate. It includes a hint to collect the table [statistics](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#table-statistics) manually. [#86871][#86871]
-- CockroachDB allows mismatched type numbers in `PREPARE` statements. [#87161][#87161]
-- Decreased the cardinality of the number `N` in `__moreN__` placeholders when replacing literals. [#87269][#87269]
-- The structured payloads used for telemetry logs now include the new `Regions` field which indicates the [regions](https://www.cockroachlabs.com/docs/v22.1/multiregion-overview#database-regions) of the nodes where SQL processing ran for the query. [#87466][#87466]
-- Added the schema name to [index](https://www.cockroachlabs.com/docs/v22.1/indexes) usage statistics telemetry. [#87624][#87624]
-- Added a creation timestamp to [index](https://www.cockroachlabs.com/docs/v22.1/indexes) usage statistics telemetry. [#87624][#87624]
-
-
Command-line changes
-
-- The `\c` metacommand in the [`cockroach sql`](https://www.cockroachlabs.com/docs/v22.1/cockroach-sql) shell no longer shows the password in plaintext. [#87548][#87548]
-
-
DB Console changes
-
-- The plan table on the **Explain Plans** tab of the [Statement Details](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page) page now displays the plan gist instead of plan ID. Also added the plan gist as the first line on the actual **Explain Plans** display. [#86872][#86872]
-- Added new **Last Execution Time** column to the statements table on the [**Statements** page](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page). This column is hidden by default. [#87077][#87077]
-- Added **Transaction Fingerprint ID** and **Statement Fingerprint ID** columns to the corresponding [**SQL Activity**](https://www.cockroachlabs.com/docs/v22.1/ui-sql-dashboard) overview pages. These columns are hidden by default. [#87100][#87100]
-- Properly formatted the **Execution Count** on the [**Statement Fingerprints**](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page. Increased the timeout for the **Statement Fingerprints** page so that it shows a proper timeout error when a timeout occurs, instead of crashing the page. [#87209][#87209]
-
-
Bug fixes
-
-- Fixed a vulnerability in the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) that could cause a panic in rare cases when planning complex queries with [`ORDER BY`](https://www.cockroachlabs.com/docs/v22.1/order-by). [#86804][#86804]
-- Previously, queries with many [joins](https://www.cockroachlabs.com/docs/v22.1/joins) and projections of multi-column expressions (e.g., `col1 + col2`), either present in the query or within a [virtual column](https://www.cockroachlabs.com/docs/v22.1/computed-columns) definition, could experience very long optimization times or hangs, where the query is never sent for execution. This is now fixed by adding the `disable_hoist_projection_in_join_limitation` [session flag](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables). [#85871][#85871]
-- Fixed a crash/panic that could occur if placeholder arguments were used with the `with_min_timestamp` or `with_max_staleness` [functions](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators). [#86881][#86881]
-- Fixed a crash that could happen when formatting queries that have placeholder `BitArray` arguments. [#86885][#86885]
-- CockroachDB now more precisely respects the `distsql_workmem` [setting](https://www.cockroachlabs.com/docs/v22.1/set-vars#supported-variables) which improves the stability of each node and makes OOMs less likely. [#86916][#86916]
-- Previously, escaping a double quote (`"`) with [`COPY`](https://www.cockroachlabs.com/docs/v22.1/copy-from) in `CSV` mode could ignore all subsequent lines in the same `COPY` if an `ESCAPE` clause were specified. This is now resolved. [#86977][#86977]
-- Fixed a bug that caused some special characters to be misread if they were being read by [`COPY ... FROM`](https://www.cockroachlabs.com/docs/v22.1/copy-from) into a `TEXT[]` column. [#86887][#86887]
-- The timescale object is now properly constructed from session storage, preventing bugs and crashes on pages that use the timescale object when reloading the page. [#86975][#86975]
-- Previously, CockroachDB would return an internal error when evaluating the `json_build_object` [built-in](https://www.cockroachlabs.com/docs/v22.1/functions-and-operators) when an [enum](https://www.cockroachlabs.com/docs/v22.1/enum) or a void datum were passed as the first argument. This is now fixed. [#86851][#86851]
-- The statement tag returned in the pgwire protocol for the [`SHOW`](https://www.cockroachlabs.com/docs/v22.1/show-vars) command no longer contains the number of returned rows. [#87126][#87126]
-- Fixed a bug where the options given to the [`BEGIN TRANSACTION`](https://www.cockroachlabs.com/docs/v22.1/begin-transaction) command would be ignored if the `BEGIN` was a prepared statement. [#87126][#87126]
-- Fixed a bug that caused internal errors like "unable to [vectorize](https://www.cockroachlabs.com/docs/v22.1/vectorized-execution) execution plan: unhandled expression type" in rare cases. [#87182][#87182]
-- The **Explain Plans** tab inside the [Statement Fingerprints](https://www.cockroachlabs.com/docs/v22.1/ui-statements-page#statement-fingerprint-page) page now groups plans that have the same shape but a different number of spans in corresponding scans. [#87211][#87211]
-- Fixed a bug in the column backfiller, which is used to add or remove columns from tables, that failed to account for the need to read [virtual columns](https://www.cockroachlabs.com/docs/v22.1/computed-columns) that are part of a [primary key](https://www.cockroachlabs.com/docs/v22.1/primary-key). Starting in v22.1, [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.1/hash-sharded-indexes) use virtual columns, so any hash-sharded table created in v22.1, or any table created with a virtual column as part of its primary key, would indefinitely fail to complete a [schema change](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) that adds or removes columns. This bug has been fixed. [#87272][#87272]
-- Added a missing memory accounting call when appending a KV to the underlying `kvBuf`. [#87118][#87118]
-- Some [upgrade](https://www.cockroachlabs.com/docs/v22.1/upgrade-cockroach-version) migrations perform [schema changes](https://www.cockroachlabs.com/docs/v22.1/online-schema-changes) on system tables. Those upgrades which added [indexes](https://www.cockroachlabs.com/docs/v22.1/indexes) could, in some cases, get caught retrying because they failed to detect that the migration had already occurred due to the existence of a populated field. When that happens, the finalization of the new version could hang indefinitely and require manual intervention. This bug has been fixed. [#87633][#87633]
-- Fixed a bug that led to the `querySummary` field in the metadata column of `crdb_internal.statement_statistics` being empty. [#87618][#87618]
-- Previously, the `querySummary` metadata field in the `crdb_internal.statement_statistics` table was inconsistent with the query metadata field for executed prepared statements. These fields are now consistent for prepared statements. [#87618][#87618]
-- Fixed a rare bug where errors could occur related to the use of [arrays](https://www.cockroachlabs.com/docs/v22.1/array) of [enums](https://www.cockroachlabs.com/docs/v22.1/enum). [#85961][#85961]
-- Fixed a bug that would result in a failed cluster [restore](https://www.cockroachlabs.com/docs/v22.1/restore). [#87764][#87764]
-- Fixed a misused query optimization involving tables with one or more [`PARTITION BY`](https://www.cockroachlabs.com/docs/v22.1/partition-by) clauses and partition [zone constraints](https://www.cockroachlabs.com/docs/v22.1/configure-replication-zones) which assign [region locality](https://www.cockroachlabs.com/docs/v22.1/set-locality) to those partitions. In some cases the [optimizer](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer) picks a ['locality-optimized search'](https://www.cockroachlabs.com/docs/v22.1/cost-based-optimizer#locality-optimized-search-in-multi-region-clusters) query plan which is not truly locality-optimized, and has higher latency than competing query plans which use distributed scan. Locality-optimized search is now avoided in cases which are known not to benefit from this optimization. [#87848][#87848]
-
-
Performance improvements
-
-- Planning time has been reduced for queries over tables with a large number of columns and/or [indexes](https://www.cockroachlabs.com/docs/v22.1/indexes). [#86749][#86749]
-- Long-running SQL sessions are now less likely to maintain large allocations for long periods of time, which decreases the risk of OOM and improves memory utilization. [#86797][#86797]
-
-
Build changes
-
-- Fixed OSS builds that did not have CCL-licensed UI intermediates lingering on-disk. [#86425][#86425]
-
-
-
-- For pgwire-level prepared statements, CockroachDB now supports the case where the number of type hints is greater than the number of placeholders in a given query. [#88145][#88145]
-- Placeholder indexes are now always rewritten as `$1` to limit the number of statement fingerprints created. [#88364][#88364]
-- Changed the default value of `sql.metrics.statement_details.plan_collection.enabled` to `false`, as this information is no longer used. [#88420][#88420]
-
-
Operational changes
-
-- Reduced the length of the `raft.process.handleready.latency` metric help text to avoid it being rejected by certain Prometheus services. [#88147][#88147]
-
-
DB Console changes
-
-- Changed the column name from `Users` to `User Name` on the **Databases** > **Tables** page, when viewing Grants. [#87857][#87857]
-- Fixed the index and grant sorting on the **Databases** page to have a default column, and to update the URL to match the selected item. [#87862][#87862]
-- Added "Application Name" to the **SQL Activity** > **Statements**, **Transaction Overview** (and their respective column selectors), and **Transaction Details** pages, and updated the label from "App" to "Application Name" on the **Statement Details** page. [#87874][#87874]
-- On the **SQL Activity** "Session Details" page, the "Most Recent Statement" column now shows the last active query instead of "No Active Statement". [#88055][#88055]
-
-
Bug fixes
-
-- Previously, an active replication report update could prevent a node from shutting down until it completed. Now, the report update is canceled on node shutdown instead. [#87924][#87924]
-- Fixed a bug with [`LOOKUP`](https://www.cockroachlabs.com/docs/v22.2/joins#lookup-joins) join selectivity estimation when using [hash-sharded indexes](https://www.cockroachlabs.com/docs/v22.2/hash-sharded-indexes), which could cause `LOOKUP` joins to be selected by the optimizer in cases where other join methods are less expensive. [#87390][#87390]
-- Fixed incorrect results from queries which utilize [locality](https://www.cockroachlabs.com/docs/v22.2/cockroach-start#locality)-optimized search on the inverted index of a table with `REGIONAL BY ROW` partitioning. [#88113][#88113]
-- The `current_setting` [built-in function](https://www.cockroachlabs.com/docs/v22.2/functions-and-operators) no longer results in an error when checking a custom [session setting](https://www.cockroachlabs.com/docs/v22.2/set-vars) that does not exist when the `missing_ok` argument is set to `true`. [#88161][#88161]
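-
-  A small sketch, using a hypothetical custom setting name:
-
-  ~~~ sql
-  -- `myapp.custom_option` is a hypothetical setting name; with missing_ok = true,
-  -- this now returns NULL instead of an error if the setting does not exist.
-  SELECT current_setting('myapp.custom_option', true);
-  ~~~
-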
-- When a CockroachDB node is being [drained](https://www.cockroachlabs.com/docs/v22.2/node-shutdown#drain-a-node-manually), all queries that are still running on that node are now forcefully canceled after waiting for the specified `server.shutdown.query_wait` period if the newly-added cluster setting `sql.distsql.drain.cancel_after_wait.enabled` is set to `true` (it is `false` by default). [#88150][#88150]
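-
-  As a sketch, the new behavior can be opted into with:
-
-  ~~~ sql
-  -- Defaults to false; true cancels remaining queries after server.shutdown.query_wait elapses.
-  SET CLUSTER SETTING sql.distsql.drain.cancel_after_wait.enabled = true;
-  ~~~
-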
-- Previously, CockroachDB could incorrectly fail to fetch rows with `NULL` values when reading from the unique secondary index when multiple [column families](https://www.cockroachlabs.com/docs/v22.2/column-families) are defined for the table and the index doesn't store some of the `NOT NULL` columns. [#88209][#88209]
-- CockroachDB now more promptly reacts to query cancellations (e.g., due to statement timeout being exceeded) after the query [spills to disk](https://www.cockroachlabs.com/docs/v22.2/vectorized-execution#disk-spilling-operations). [#88394][#88394]
-- Fixed a bug existing since before v21.1 that could cause an internal error when executing a query with `LIMIT` ordering on the output of a [window function](https://www.cockroachlabs.com/docs/v22.2/window-functions). [#87746][#87746]
-- CockroachDB no longer fetches unnecessary rows for queries with specified `LIMIT`s. The bug was introduced in v22.1.7. [#88421][#88421]
-- Prometheus histograms were incorrectly omitting buckets whose cumulative count matched the preceding bucket. This would lead to erroneous results when operating on histogram sums. [#88331][#88331]
-- Completed [statement diagnostics bundles](https://www.cockroachlabs.com/docs/v22.2/explain-analyze#debug-option) now persist in the DB Console and can be seen on the **Statement Diagnostics History** page, under **Advanced Debug**. [#88390][#88390]
-- Dropping temporary tables and sequences now properly checks a user's privileges. [#88360][#88360]
-- The pgwire `DESCRIBE` step no longer fails with an error while attempting to look up cursors declared with names containing special characters. [#88413][#88413]
-- Fixed a bug in [`BACKUP`](https://www.cockroachlabs.com/docs/v22.2/backup) where spans for views were being backed up. Because ranges are not split at view boundaries, this can cause the backup to send export requests to ranges that do not belong to any backup target. [#86681][#86681]
-- Fixed a bug where if telemetry is enabled, [`COPY`](https://www.cockroachlabs.com/docs/v22.2/copy-from) could sometimes cause the server to crash. [#88325][#88325]
-- Fixed a rare internal error that could occur during planning when a predicate included values close to the maximum or minimum `int64` value. The error, `estimated row count must be non-zero`, is now fixed. [#88533][#88533]
-- Adjusted sending and receiving Raft queue sizes to match. Previously the receiver could unnecessarily drop messages in situations when the sending queue is bigger than the receiving one. [#88448][#88448]
-
-
-
-- The following types of data are now considered "safe" for reporting from within `debug.zip`:
-
- - Range start/end keys, which can include data from any indexed SQL column.
- - Key spans, which can include data from any indexed SQL column.
- - Usernames and role names.
- - SQL object names (including names of databases, schemas, tables, sequences, views, types, and UDFs).
-
- [#88739][#88739]
-
-
SQL language changes
-
-- The new cluster setting `sql.metrics.statement_details.gateway_node.enabled` controls whether the gateway node ID is persisted to the `system.statement_statistics` table as-is or as a `0` to decrease cardinality on the table. The node ID is still available on the statistics column. [#88634][#88634]
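-
-  For example, to stop persisting the gateway node ID and write `0` instead:
-
-  ~~~ sql
-  -- The node ID remains available in the statistics column either way.
-  SET CLUSTER SETTING sql.metrics.statement_details.gateway_node.enabled = false;
-  ~~~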
-
-
Operational changes
-
-- The new cluster setting `kv.mvcc_gc.queue_interval` controls how long the MVCC garbage collection queue waits between processing replicas. The previous value of `1s` is the new default. A large volume of MVCC garbage collection work can disrupt foreground traffic. [#89430][#89430]
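-
-  As a sketch, the wait between replicas can be increased to spread out MVCC GC work (the `5s` value is illustrative):
-
-  ~~~ sql
-  -- Default is 1s; a larger interval throttles MVCC GC at the cost of slower garbage cleanup.
-  SET CLUSTER SETTING kv.mvcc_gc.queue_interval = '5s';
-  ~~~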
-
-
Command-line changes
-
-- The new `--redact` flag of the `debug zip` command triggers redaction of all sensitive data in debug zip bundles, except for range keys. The `--redact-logs` flag will be deprecated in v22.2. [#88739][#88739]
-
-
Bug fixes
-
-- Fixed a bug introduced in v22.1.7 that could cause an internal panic when a query ordering contained redundant ordering columns. [#88480][#88480]
-- Fixed a bug that could cause nodes to crash when executing apply-joins in query plans. [#88513][#88513]
-- Fixed a bug introduced in v21.2.0 that could cause errors when executing queries with correlated `WITH` expressions. [#88513][#88513]
-- Fixed a longstanding bug that could cause the optimizer to produce an incorrect plan when aggregate functions `st_makeline` or `st_extent` were called with invalid-type and empty inputs respectively. [#88952][#88952]
-- Fixed unintended recordings of index reads caused by internal executor/queries. [#88943][#88943]
-- Fixed a bug with capturing index usage statistics for database names with hyphens. [#88999][#88999]
-- Fixed a bug that caused incorrect evaluation of expressions in the form `col +/- const1 ? const2`, where `const1` and `const2` are constant values and `?` is any comparison operator. The bug was caused by operator overflow when the optimizer attempted to simplify these expressions to have a single constant value. [#88970][#88970]
-- Fixed a bug where the `system.replication_constraint_stats` table did not show erroneous voter constraint violations when `num_voters` was configured. [#88662][#88662]
-- Fixed a bug that caused incorrect results from the floor division operator, `//`, when the numerator was non-constant and the denominator was the constant 1. [#89263][#89263]
-- Fixed a bug introduced in v2.1.0 that could cause queries containing a subquery with an `EXCEPT` clause to produce incorrect results. This could happen if the optimizer could guarantee that the left side of the `EXCEPT` clause always returned more rows than the right side. In this case, the optimizer made an incorrect assumption that the `EXCEPT` subquery always returned at least one row, which could cause the optimizer to perform an invalid transformation, leading to the potential for incorrect results in the full query result. [#89134][#89134]
-- Fixed a bug that prevented saving of a statement bundle that was collected for a query that resulted in a `statement_timeout` error. [#89126][#89126]
-- Fixed a longstanding bug that could cause a panic when running a query with an `EXPLAIN` clause that attempts to order on a non-output column. [#88686][#88686]
-- Fixed a bug introduced in v22.1.0 that could cause incorrect results in a narrow circumstance where all of the following are true:
-
- - A query with `ORDER BY` and `LIMIT` is executed.
- - The table that contains the `ORDER BY` columns has an index that contains those columns.
- - The index contains a prefix of columns held to a fixed number of values by the query filter, such as `WHERE a IN (1, 3)`.
- - A `CHECK` constraint (such as `CHECK (a IN (1, 3))`) is inferred by either:
-   - A computed column expression (such as `WHERE a IN (1, 3)` and a column `b INT AS (a + 10) STORED`).
-   - A `PARTITION BY` clause (such as `INDEX (a, ...) PARTITION BY LIST (a) (PARTITION p VALUES ((1), (3)))`).
-
- [#89281][#89281]
-- The WAL is now flushed when writing storage checkpoints on consistency checker failures. [#89402][#89402]
-- Fixed a bug that could cause a restore operation to fail with a spurious error. [#89443][#89443]
-- Fixed a bug that caused changefeeds to be permanently in a "failed to send RPC" state. [#87804][#87804]
-- Improved the optimizer's selectivity and cost estimates for zigzag joins to prevent query plans from using zigzag joins when many rows qualify. [#89460][#89460]
-- A `VOTER_DEMOTING_LEARNER` can acquire the lease in a joint configuration only when there is a `VOTER_INCOMING` in the configuration and the `VOTER_DEMOTING_LEARNER` was the last leaseholder. This prevents a situation in which the system is unable to exit the joint configuration. [#89611][#89611]
-- Fixed a bug introduced in v22.1.0 that could cause CockroachDB to crash when dropping a role that owned two schemas with the same name in different databases. [#89538][#89538]
-- Fixed a bug in `pg_catalog` tables which could result in an internal error if a schema is concurrently dropped. [#88600][#88600]
-- Refined a check conducted during restore that ensures that all previously-offline tables were properly introduced. [#89688][#89688]
-- Fixed a bug present in v22.1.0 through v22.1.8 that could cause a query with `ORDER BY` and `LIMIT` clauses to return incorrect results if it scanned a multi-column index containing the `ORDER BY` columns, and a prefix of the index columns was held fixed to two or more constant values by the query filter or schema. [#88488][#88488]
-
-
Performance improvements
-
-- HTTP requests with `Accept-encoding: gzip` previously resulted in valid gzip-encoded but uncompressed responses. This resulted in inefficient HTTP transfer times. Those responses are now properly compressed, resulting in smaller network responses. [#89513][#89513]
-
-
Miscellaneous
-
-- The SQL proxy now validates the tenant certificate's common name and organization, in addition to its DNS name. The DNS name for a Kubernetes pod is the pod's IP address, and IP addresses are reused by the cluster. [#89677][#89677]
-
-- Reverted a fix for a bug that caused histograms to incorrectly omit buckets whose cumulative count matched the preceding bucket. The fix led to a significant increase in memory usage on clusters with Prometheus or OpenTelemetry collector instances. [#89532][#89532]
-
-
-- Fixed a rare internal error in the [optimizer](../v22.1/cost-based-optimizer.html), which could occur while enforcing orderings between SQL operators. This error has existed since before v22.1. [#113640][#113640]
+- Fixed a rare internal error in the optimizer, which could occur while enforcing orderings between SQL operators. This error has existed since before v22.1. [#113640][#113640]
- Fixed a bug where CockroachDB could incorrectly evaluate [lookup](../v23.2/joins.html#lookup-joins) and index [joins](../v23.2/joins.html) into tables with at least three [column families](../v23.2/column-families.html). This would result in either the `non-nullable column with no value` internal error, or the query would return incorrect results. This bug was introduced in v22.2. [#113694][#113694]
-
-
-{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %}
-
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/app/cc-free-tier-params.md b/src/current/_includes/v22.1/app/cc-free-tier-params.md
deleted file mode 100644
index f8a196cdd8e..00000000000
--- a/src/current/_includes/v22.1/app/cc-free-tier-params.md
+++ /dev/null
@@ -1,10 +0,0 @@
-Where:
-
-- `{username}` and `{password}` specify the SQL username and password that you created earlier.
-- `{globalhost}` is the name of the CockroachDB {{ site.data.products.cloud }} free tier host (e.g., `free-tier.gcp-us-central1.cockroachlabs.cloud`).
-- `{path to the CA certificate}` is the path to the `cc-ca.crt` file that you downloaded from the CockroachDB {{ site.data.products.cloud }} Console.
-- `{cluster_name}` is the name of your cluster.
-
-{{site.data.alerts.callout_info}}
-If you are using the connection string that you [copied from the **Connection info** modal](#set-up-your-cluster-connection), your username, password, hostname, and cluster name will be pre-populated.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/app/create-a-database.md b/src/current/_includes/v22.1/app/create-a-database.md
deleted file mode 100644
index 468eb93a57f..00000000000
--- a/src/current/_includes/v22.1/app/create-a-database.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
-1. In the SQL shell, create the `bank` database that your application will use:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
-1. Create a SQL user for your app:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE USER <username> WITH PASSWORD '<password>';
- ~~~
-
- Take note of the username and password. You will use it in your application code later.
-
-1. Give the user the necessary permissions:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > GRANT ALL ON DATABASE bank TO <username>;
- ~~~
-
-
-
-
-
-1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html).
-1. Start the [built-in SQL shell](cockroach-sql.html) using the connection string you got from the CockroachDB {{ site.data.products.cloud }} Console:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --url='<connection-string>'
- ~~~
-
-1. In the SQL shell, create the `bank` database that your application will use:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
-1. Exit the SQL shell:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-
-
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v22.1/app/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index 1e259b96012..00000000000
--- a/src/current/_includes/v22.1/app/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL shell](cockroach-sql.html):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v22.1/app/for-a-complete-example-go.md b/src/current/_includes/v22.1/app/for-a-complete-example-go.md
deleted file mode 100644
index 64803f686a9..00000000000
--- a/src/current/_includes/v22.1/app/for-a-complete-example-go.md
+++ /dev/null
@@ -1,4 +0,0 @@
-For complete examples, see:
-
-- [Build a Go App with CockroachDB](build-a-go-app-with-cockroachdb.html) (pgx)
-- [Build a Go App with CockroachDB and GORM](build-a-go-app-with-cockroachdb.html)
diff --git a/src/current/_includes/v22.1/app/for-a-complete-example-java.md b/src/current/_includes/v22.1/app/for-a-complete-example-java.md
deleted file mode 100644
index b4c63135ae0..00000000000
--- a/src/current/_includes/v22.1/app/for-a-complete-example-java.md
+++ /dev/null
@@ -1,4 +0,0 @@
-For complete examples, see:
-
-- [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html) (JDBC)
-- [Build a Java App with CockroachDB and Hibernate](build-a-java-app-with-cockroachdb-hibernate.html)
diff --git a/src/current/_includes/v22.1/app/for-a-complete-example-python.md b/src/current/_includes/v22.1/app/for-a-complete-example-python.md
deleted file mode 100644
index 5b5d4bec3e9..00000000000
--- a/src/current/_includes/v22.1/app/for-a-complete-example-python.md
+++ /dev/null
@@ -1,6 +0,0 @@
-For complete examples, see:
-
-- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb-psycopg3.html) (psycopg3)
-- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb.html) (psycopg2)
-- [Build a Python App with CockroachDB and SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html)
-- [Build a Python App with CockroachDB and Django](build-a-python-app-with-cockroachdb-django.html)
diff --git a/src/current/_includes/v22.1/app/hibernate-dialects-note.md b/src/current/_includes/v22.1/app/hibernate-dialects-note.md
deleted file mode 100644
index 85f217abd3c..00000000000
--- a/src/current/_includes/v22.1/app/hibernate-dialects-note.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Versions of the Hibernate CockroachDB dialect correspond to the version of CockroachDB installed on your machine. For example, `org.hibernate.dialect.CockroachDB201Dialect` corresponds to CockroachDB v20.1 and later, and `org.hibernate.dialect.CockroachDB192Dialect` corresponds to CockroachDB v19.2 and later.
-
-All dialect versions are forward-compatible (e.g., CockroachDB v20.1 is compatible with `CockroachDB192Dialect`), as long as your application is not affected by any backward-incompatible changes listed in your CockroachDB version's [release notes](../releases/index.html). In the event of a CockroachDB version upgrade, using a previous version of the CockroachDB dialect will not break an application, but, to enable all features available in your version of CockroachDB, we recommend keeping the dialect version in sync with the installed version of CockroachDB.
-
-Not all versions of CockroachDB have a corresponding dialect yet. Use the dialect number that is closest to your installed version of CockroachDB. For example, use `CockroachDB201Dialect` when using CockroachDB v21.1 and later.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v22.1/app/insecure/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index 0fff36e7545..00000000000
--- a/src/current/_includes/v22.1/app/insecure/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL shell](cockroach-sql.html):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/Sample.java b/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/Sample.java
deleted file mode 100644
index d1a54a8ddd2..00000000000
--- a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/Sample.java
+++ /dev/null
@@ -1,215 +0,0 @@
-package com.cockroachlabs;
-
-import com.cockroachlabs.example.jooq.db.Tables;
-import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord;
-import org.jooq.DSLContext;
-import org.jooq.SQLDialect;
-import org.jooq.Source;
-import org.jooq.conf.RenderQuotedNames;
-import org.jooq.conf.Settings;
-import org.jooq.exception.DataAccessException;
-import org.jooq.impl.DSL;
-
-import java.io.InputStream;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-import java.util.*;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicLong;
-import java.util.function.Function;
-
-import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS;
-
-public class Sample {
-
- private static final Random RAND = new Random();
- private static final boolean FORCE_RETRY = false;
- private static final String RETRY_SQL_STATE = "40001";
- private static final int MAX_ATTEMPT_COUNT = 6;
-
- private static Function<DSLContext, Long> addAccounts() {
- return ctx -> {
- long rv = 0;
-
- ctx.delete(ACCOUNTS).execute();
- ctx.batchInsert(
- new AccountsRecord(1L, 1000L),
- new AccountsRecord(2L, 250L),
- new AccountsRecord(3L, 314159L)
- ).execute();
-
- rv = 1;
- System.out.printf("APP: addAccounts() --> %d\n", rv);
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> transferFunds(long fromId, long toId, long amount) {
- return ctx -> {
- long rv = 0;
-
- AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId));
- AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId));
-
- if (!(amount > fromAccount.getBalance())) {
- fromAccount.setBalance(fromAccount.getBalance() - amount);
- toAccount.setBalance(toAccount.getBalance() + amount);
-
- ctx.batchUpdate(fromAccount, toAccount).execute();
- rv = amount;
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
- }
-
- return rv;
- };
- }
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // intended for production code.
- private static Function<DSLContext, Long> forceRetryLogic() {
- return ctx -> {
- long rv = -1;
- try {
- System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
- ctx.execute("SELECT crdb_internal.force_retry('1s')");
- } catch (DataAccessException e) {
- System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
- throw e;
- }
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> getAccountBalance(long id) {
- return ctx -> {
- AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id));
- long balance = account.getBalance();
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
- return balance;
- };
- }
-
- // Run SQL code in a way that automatically handles the
- // transaction retry logic so we do not have to duplicate it in
- // various places.
- private static long runTransaction(DSLContext session, Function<DSLContext, Long> fn) {
- AtomicLong rv = new AtomicLong(0L);
- AtomicInteger attemptCount = new AtomicInteger(0);
-
- while (attemptCount.get() < MAX_ATTEMPT_COUNT) {
- attemptCount.incrementAndGet();
-
- if (attemptCount.get() > 1) {
- System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get());
- }
-
- if (session.connectionResult(connection -> {
- connection.setAutoCommit(false);
- System.out.printf("APP: BEGIN;\n");
-
- if (attemptCount.get() == MAX_ATTEMPT_COUNT) {
- String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
- throw new RuntimeException(err);
- }
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryLogic()'.
- if (FORCE_RETRY) {
- session.fetch("SELECT now()");
- }
-
- try {
- rv.set(fn.apply(session));
- if (rv.get() != -1) {
- connection.commit();
- System.out.printf("APP: COMMIT;\n");
- return true;
- }
- } catch (DataAccessException | SQLException e) {
- String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState();
-
- if (RETRY_SQL_STATE.equals(sqlState)) {
- // Since this is a transaction retry error, we
- // roll back the transaction and sleep a little
- // before trying again. Each time through the
- // loop we sleep for a little longer than the last
- // time (A.K.A. exponential backoff).
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get());
- System.out.printf("APP: ROLLBACK;\n");
- connection.rollback();
- int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100);
- System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
- try {
- Thread.sleep(sleepMillis);
- } catch (InterruptedException ignored) {
- // no-op
- }
- rv.set(-1L);
- } else {
- throw e;
- }
- }
-
- return false;
- })) {
- break;
- }
- }
-
- return rv.get();
- }
-
- public static void main(String[] args) throws Exception {
- try (Connection connection = DriverManager.getConnection(
- "jdbc:postgresql://localhost:26257/bank?sslmode=disable",
- "maxroach",
- ""
- )) {
- DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings()
- .withExecuteLogging(true)
- .withRenderQuotedNames(RenderQuotedNames.NEVER));
-
- // Initialise database with db.sql script
- try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) {
- ctx.parser().parse(Source.of(in).readString()).executeBatch();
- }
-
- long fromAccountId = 1;
- long toAccountId = 2;
- long transferAmount = 100;
-
- if (FORCE_RETRY) {
- System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
- runTransaction(ctx, forceRetryLogic());
- } else {
-
- runTransaction(ctx, addAccounts());
- long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalance = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalance != -1 && toBalance != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
- }
-
- // Transfer $100 from account 1 to account 2
- long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount));
- if (transferResult != -1) {
- // Success!
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
-
- long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
- }
- }
- }
- }
- }
-}
diff --git a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip
deleted file mode 100644
index f11f86b8f43..00000000000
Binary files a/src/current/_includes/v22.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ
diff --git a/src/current/_includes/v22.1/app/insecure/upperdb-basic-sample/main.go b/src/current/_includes/v22.1/app/insecure/upperdb-basic-sample/main.go
deleted file mode 100644
index 5c855356d7e..00000000000
--- a/src/current/_includes/v22.1/app/insecure/upperdb-basic-sample/main.go
+++ /dev/null
@@ -1,185 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
- "time"
-
- "github.com/upper/db/v4"
- "github.com/upper/db/v4/adapter/cockroachdb"
-)
-
-// The settings variable stores connection details.
-var settings = cockroachdb.ConnectionURL{
- Host: "localhost",
- Database: "bank",
- User: "maxroach",
- Options: map[string]string{
- // Insecure node.
- "sslmode": "disable",
- },
-}
-
-// Accounts is a handy way to represent a collection.
-func Accounts(sess db.Session) db.Store {
- return sess.Collection("accounts")
-}
-
-// Account is used to represent a single record in the "accounts" table.
-type Account struct {
- ID uint64 `db:"id,omitempty"`
- Balance int64 `db:"balance"`
-}
-
-// Collection is required in order to create a relation between the Account
-// struct and the "accounts" table.
-func (a *Account) Store(sess db.Session) db.Store {
- return Accounts(sess)
-}
-
-// createTables creates all the tables that are necessary to run this example.
-func createTables(sess db.Session) error {
- _, err := sess.SQL().Exec(`
- CREATE TABLE IF NOT EXISTS accounts (
- ID SERIAL PRIMARY KEY,
- balance INT
- )
- `)
- if err != nil {
- return err
- }
- return nil
-}
-
-// crdbForceRetry can be used to simulate a transaction error and
-// demonstrate upper/db's ability to retry the transaction automatically.
-//
-// By default, upper/db will retry the transaction five times, if you want
-// to modify this number use: sess.SetMaxTransactionRetries(n).
-//
-// This is only used for demonstration purposes and not intended
-// for production code.
-func crdbForceRetry(sess db.Session) error {
- var err error
-
- // The first statement in a transaction can be retried transparently on the
- // server, so we need to add a placeholder statement so that our
- // force_retry() statement isn't the first one.
- _, err = sess.SQL().Exec(`SELECT 1`)
- if err != nil {
- return err
- }
-
- // If force_retry is called during the specified interval from the beginning
- // of the transaction it returns a retryable error. If not, 0 is returned
- // instead of an error.
- _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`)
- if err != nil {
- return err
- }
-
- return nil
-}
-
-func main() {
- // Connect to the local CockroachDB node.
- sess, err := cockroachdb.Open(settings)
- if err != nil {
- log.Fatal("cockroachdb.Open: ", err)
- }
- defer sess.Close()
-
- // Adjust this number to fit your specific needs (set to 5, by default)
- // sess.SetMaxTransactionRetries(10)
-
- // Create the "accounts" table
- createTables(sess)
-
- // Delete all the previous items in the "accounts" table.
- err = Accounts(sess).Truncate()
- if err != nil {
- log.Fatal("Truncate: ", err)
- }
-
- // Create a new account with a balance of 1000.
- account1 := Account{Balance: 1000}
- err = Accounts(sess).InsertReturning(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Create a new account with a balance of 250.
- account2 := Account{Balance: 250}
- err = Accounts(sess).InsertReturning(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Change the balance of the first account.
- account1.Balance = 500
- err = sess.Save(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Change the balance of the second account.
- account2.Balance = 999
- err = sess.Save(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Delete the first record.
- err = sess.Delete(&account1)
- if err != nil {
- log.Fatal("Delete: ", err)
- }
-
- startTime := time.Now()
-
- // Add a couple of new records within a transaction.
- err = sess.Tx(func(tx db.Session) error {
- var err error
-
- if err = tx.Save(&Account{Balance: 887}); err != nil {
- return err
- }
-
- if time.Now().Sub(startTime) < time.Second*1 {
- // Will fail continuously for 2 seconds.
- if err = crdbForceRetry(tx); err != nil {
- return err
- }
- }
-
- if err = tx.Save(&Account{Balance: 342}); err != nil {
- return err
- }
-
- return nil
- })
- if err != nil {
- log.Fatal("Could not commit transaction: ", err)
- }
-
- // Printing records
- printRecords(sess)
-}
-
-func printRecords(sess db.Session) {
- accounts := []Account{}
- err := Accounts(sess).Find().All(&accounts)
- if err != nil {
- log.Fatal("Find: ", err)
- }
- log.Printf("Balances:")
- for i := range accounts {
- fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance)
- }
-}
diff --git a/src/current/_includes/v22.1/app/java-tls-note.md b/src/current/_includes/v22.1/app/java-tls-note.md
deleted file mode 100644
index a1fd6f61600..00000000000
--- a/src/current/_includes/v22.1/app/java-tls-note.md
+++ /dev/null
@@ -1,13 +0,0 @@
-{{site.data.alerts.callout_danger}}
-CockroachDB supports TLS 1.2 and 1.3, and uses 1.3 by default.
-
-[A bug in the TLS 1.3 implementation](https://bugs.openjdk.java.net/browse/JDK-8236039) in Java 11 versions lower than 11.0.7 and Java 13 versions lower than 13.0.3 makes the versions incompatible with CockroachDB.
-
-If an incompatible version is used, the client may throw the following exception:
-
-`javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request`
-
-For applications running Java 11 or 13, make sure that you have version 11.0.7 or higher, or 13.0.3 or higher.
-
-If you cannot upgrade to a version higher than 11.0.7 or 13.0.3, you must configure the application to use TLS 1.2. For example, when starting your app, use: `$ java -Djdk.tls.client.protocols=TLSv1.2 appName`
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/app/java-version-note.md b/src/current/_includes/v22.1/app/java-version-note.md
deleted file mode 100644
index 3d559314262..00000000000
--- a/src/current/_includes/v22.1/app/java-version-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-We recommend using Java versions 8+ with CockroachDB.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/app/jooq-basic-sample/Sample.java b/src/current/_includes/v22.1/app/jooq-basic-sample/Sample.java
deleted file mode 100644
index fd71726603e..00000000000
--- a/src/current/_includes/v22.1/app/jooq-basic-sample/Sample.java
+++ /dev/null
@@ -1,215 +0,0 @@
-package com.cockroachlabs;
-
-import com.cockroachlabs.example.jooq.db.Tables;
-import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord;
-import org.jooq.DSLContext;
-import org.jooq.SQLDialect;
-import org.jooq.Source;
-import org.jooq.conf.RenderQuotedNames;
-import org.jooq.conf.Settings;
-import org.jooq.exception.DataAccessException;
-import org.jooq.impl.DSL;
-
-import java.io.InputStream;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-import java.util.*;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicLong;
-import java.util.function.Function;
-
-import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS;
-
-public class Sample {
-
- private static final Random RAND = new Random();
- private static final boolean FORCE_RETRY = false;
- private static final String RETRY_SQL_STATE = "40001";
- private static final int MAX_ATTEMPT_COUNT = 6;
-
- private static Function<DSLContext, Long> addAccounts() {
- return ctx -> {
- long rv = 0;
-
- ctx.delete(ACCOUNTS).execute();
- ctx.batchInsert(
- new AccountsRecord(1L, 1000L),
- new AccountsRecord(2L, 250L),
- new AccountsRecord(3L, 314159L)
- ).execute();
-
- rv = 1;
- System.out.printf("APP: addAccounts() --> %d\n", rv);
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> transferFunds(long fromId, long toId, long amount) {
- return ctx -> {
- long rv = 0;
-
- AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId));
- AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId));
-
- if (!(amount > fromAccount.getBalance())) {
- fromAccount.setBalance(fromAccount.getBalance() - amount);
- toAccount.setBalance(toAccount.getBalance() + amount);
-
- ctx.batchUpdate(fromAccount, toAccount).execute();
- rv = amount;
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
- }
-
- return rv;
- };
- }
-
- // Test our retry handling logic if FORCE_RETRY is true. This
- // method is only used to test the retry logic. It is not
- // intended for production code.
- private static Function<DSLContext, Long> forceRetryLogic() {
- return ctx -> {
- long rv = -1;
- try {
- System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
- ctx.execute("SELECT crdb_internal.force_retry('1s')");
- } catch (DataAccessException e) {
- System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
- throw e;
- }
- return rv;
- };
- }
-
- private static Function<DSLContext, Long> getAccountBalance(long id) {
- return ctx -> {
- AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id));
- long balance = account.getBalance();
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
- return balance;
- };
- }
-
- // Run SQL code in a way that automatically handles the
- // transaction retry logic so we do not have to duplicate it in
- // various places.
- private static long runTransaction(DSLContext session, Function<DSLContext, Long> fn) {
- AtomicLong rv = new AtomicLong(0L);
- AtomicInteger attemptCount = new AtomicInteger(0);
-
- while (attemptCount.get() < MAX_ATTEMPT_COUNT) {
- attemptCount.incrementAndGet();
-
- if (attemptCount.get() > 1) {
- System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get());
- }
-
- if (session.connectionResult(connection -> {
- connection.setAutoCommit(false);
- System.out.printf("APP: BEGIN;\n");
-
- if (attemptCount.get() == MAX_ATTEMPT_COUNT) {
- String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
- throw new RuntimeException(err);
- }
-
- // This block is only used to test the retry logic.
- // It is not necessary in production code. See also
- // the method 'testRetryLogic()'.
- if (FORCE_RETRY) {
- session.fetch("SELECT now()");
- }
-
- try {
- rv.set(fn.apply(session));
- if (rv.get() != -1) {
- connection.commit();
- System.out.printf("APP: COMMIT;\n");
- return true;
- }
- } catch (DataAccessException | SQLException e) {
- String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState();
-
- if (RETRY_SQL_STATE.equals(sqlState)) {
- // Since this is a transaction retry error, we
- // roll back the transaction and sleep a little
- // before trying again. Each time through the
- // loop we sleep for a little longer than the last
- // time (A.K.A. exponential backoff).
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get());
- System.out.printf("APP: ROLLBACK;\n");
- connection.rollback();
- int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100);
- System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
- try {
- Thread.sleep(sleepMillis);
- } catch (InterruptedException ignored) {
- // no-op
- }
- rv.set(-1L);
- } else {
- throw e;
- }
- }
-
- return false;
- })) {
- break;
- }
- }
-
- return rv.get();
- }
-
- public static void main(String[] args) throws Exception {
- try (Connection connection = DriverManager.getConnection(
- "jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key.pk8&sslcert=certs/client.maxroach.crt",
- "maxroach",
- ""
- )) {
- DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings()
- .withExecuteLogging(true)
- .withRenderQuotedNames(RenderQuotedNames.NEVER));
-
- // Initialise database with db.sql script
- try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) {
- ctx.parser().parse(Source.of(in).readString()).executeBatch();
- }
-
- long fromAccountId = 1;
- long toAccountId = 2;
- long transferAmount = 100;
-
- if (FORCE_RETRY) {
- System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
- runTransaction(ctx, forceRetryLogic());
- } else {
-
- runTransaction(ctx, addAccounts());
- long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalance = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalance != -1 && toBalance != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
- }
-
- // Transfer $100 from account 1 to account 2
- long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount));
- if (transferResult != -1) {
- // Success!
- System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
-
- long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId));
- long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId));
- if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
- // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
- }
- }
- }
- }
- }
-}
diff --git a/src/current/_includes/v22.1/app/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v22.1/app/jooq-basic-sample/jooq-basic-sample.zip
deleted file mode 100644
index 859305478c0..00000000000
Binary files a/src/current/_includes/v22.1/app/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ
diff --git a/src/current/_includes/v22.1/app/pkcs8-gen.md b/src/current/_includes/v22.1/app/pkcs8-gen.md
deleted file mode 100644
index 411d262e970..00000000000
--- a/src/current/_includes/v22.1/app/pkcs8-gen.md
+++ /dev/null
@@ -1,8 +0,0 @@
-You can pass the [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) to [`cockroach cert`](cockroach-cert.html) to generate a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. For example, if you have the user `max`:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client max --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key
-~~~
-
-The generated PKCS8 key will be named `client.max.key.pk8`.
diff --git a/src/current/_includes/v22.1/app/python/sqlalchemy/sqlalchemy-large-txns.py b/src/current/_includes/v22.1/app/python/sqlalchemy/sqlalchemy-large-txns.py
deleted file mode 100644
index 7a6ef82c2e3..00000000000
--- a/src/current/_includes/v22.1/app/python/sqlalchemy/sqlalchemy-large-txns.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from sqlalchemy import create_engine, Column, Float, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-from cockroachdb.sqlalchemy import run_transaction
-from random import random
-
-Base = declarative_base()
-
-# The code below assumes you have run the following SQL statements.
-
-# CREATE DATABASE pointstore;
-
-# USE pointstore;
-
-# CREATE TABLE points (
-# id INT PRIMARY KEY DEFAULT unique_rowid(),
-# x FLOAT NOT NULL,
-# y FLOAT NOT NULL,
-# z FLOAT NOT NULL
-# );
-
-engine = create_engine(
- # For cockroach demo:
- 'cockroachdb://<username>:<password>@<host>:<port>/bank?sslmode=require',
- echo=True # Log SQL queries to stdout
-)
-
-
-class Point(Base):
- __tablename__ = 'points'
- id = Column(Integer, primary_key=True)
- x = Column(Float)
- y = Column(Float)
- z = Column(Float)
-
-
-def add_points(num_points):
- chunk_size = 1000 # Tune this based on object sizes.
-
- def add_points_helper(sess, chunk, num_points):
- points = []
- for i in range(chunk, min(chunk + chunk_size, num_points)):
- points.append(
- Point(x=random()*1024, y=random()*1024, z=random()*1024)
- )
- sess.bulk_save_objects(points)
-
- for chunk in range(0, num_points, chunk_size):
- run_transaction(
- sessionmaker(bind=engine),
- lambda s: add_points_helper(
- s, chunk, min(chunk + chunk_size, num_points)
- )
- )
-
-
-add_points(10000)
diff --git a/src/current/_includes/v22.1/app/retry-errors.md b/src/current/_includes/v22.1/app/retry-errors.md
deleted file mode 100644
index 3a20939e97c..00000000000
--- a/src/current/_includes/v22.1/app/retry-errors.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-Your application should [use a retry loop to handle transaction errors](error-handling-and-troubleshooting.html#transaction-retry-errors) that can occur under [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/app/see-also-links.md b/src/current/_includes/v22.1/app/see-also-links.md
deleted file mode 100644
index ee55292e744..00000000000
--- a/src/current/_includes/v22.1/app/see-also-links.md
+++ /dev/null
@@ -1,9 +0,0 @@
-You might also be interested in the following pages:
-
-- [Client Connection Parameters](connection-parameters.html)
-- [Connection Pooling](connection-pooling.html)
-- [Data Replication](demo-replication-and-rebalancing.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Replication & Rebalancing](demo-replication-and-rebalancing.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
diff --git a/src/current/_includes/v22.1/app/start-cockroachdb.md b/src/current/_includes/v22.1/app/start-cockroachdb.md
deleted file mode 100644
index a3348e2c4da..00000000000
--- a/src/current/_includes/v22.1/app/start-cockroachdb.md
+++ /dev/null
@@ -1,58 +0,0 @@
-Choose whether to run a temporary local cluster or a free CockroachDB cluster on CockroachDB {{ site.data.products.serverless }}. The instructions below will adjust accordingly.
-
-
-
-
-
-
-
-
-### Create a free cluster
-
-{% include cockroachcloud/quickstart/create-a-free-cluster.md %}
-
-### Set up your cluster connection
-
-The **Connection info** dialog shows information about how to connect to your cluster.
-
-1. Click the **Choose your OS** dropdown, and select the operating system of your local machine.
-
-1. Click the **Connection string** tab in the **Connection info** dialog.
-
-1. Open a new terminal on your local machine, and run the command provided in step **1** to download the CA certificate. This certificate is required by some clients connecting to CockroachDB {{ site.data.products.cloud }}.
-
-1. Copy the connection string provided in step **2** to a secure location.
-
- {{site.data.alerts.callout_info}}
- The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster/<cluster-id>/users`.
- {{site.data.alerts.end}}
-
-
-
-
-
-1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html).
-1. Run the [`cockroach demo`](cockroach-demo.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach demo \
- --no-example-database
- ~~~
-
- This starts a temporary, in-memory cluster and opens an interactive SQL shell to the cluster. Any changes to the database will not persist after the cluster is stopped.
-
- {{site.data.alerts.callout_info}}
- If `cockroach demo` fails due to SSL authentication, make sure you have cleared any previously downloaded CA certificates from the directory `~/.postgresql`.
- {{site.data.alerts.end}}
-
-1. Take note of the `(sql)` connection string in the SQL shell welcome text:
-
- ~~~
- # Connection parameters:
- # (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo
- # (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require
- # (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257
- ~~~
-
-
diff --git a/src/current/_includes/v22.1/app/upperdb-basic-sample/main.go b/src/current/_includes/v22.1/app/upperdb-basic-sample/main.go
deleted file mode 100644
index 3e838fe43e2..00000000000
--- a/src/current/_includes/v22.1/app/upperdb-basic-sample/main.go
+++ /dev/null
@@ -1,187 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
- "time"
-
- "github.com/upper/db/v4"
- "github.com/upper/db/v4/adapter/cockroachdb"
-)
-
-// The settings variable stores connection details.
-var settings = cockroachdb.ConnectionURL{
- Host: "localhost",
- Database: "bank",
- User: "maxroach",
- Options: map[string]string{
- // Secure node.
- "sslrootcert": "certs/ca.crt",
- "sslkey": "certs/client.maxroach.key",
- "sslcert": "certs/client.maxroach.crt",
- },
-}
-
-// Accounts is a handy way to represent a collection.
-func Accounts(sess db.Session) db.Store {
- return sess.Collection("accounts")
-}
-
-// Account is used to represent a single record in the "accounts" table.
-type Account struct {
- ID uint64 `db:"id,omitempty"`
- Balance int64 `db:"balance"`
-}
-
-// Collection is required in order to create a relation between the Account
-// struct and the "accounts" table.
-func (a *Account) Store(sess db.Session) db.Store {
- return Accounts(sess)
-}
-
-// createTables creates all the tables that are necessary to run this example.
-func createTables(sess db.Session) error {
- _, err := sess.SQL().Exec(`
- CREATE TABLE IF NOT EXISTS accounts (
- ID SERIAL PRIMARY KEY,
- balance INT
- )
- `)
- if err != nil {
- return err
- }
- return nil
-}
-
-// crdbForceRetry can be used to simulate a transaction error and
-// demonstrate upper/db's ability to retry the transaction automatically.
-//
-// By default, upper/db will retry the transaction five times, if you want
-// to modify this number use: sess.SetMaxTransactionRetries(n).
-//
-// This is only used for demonstration purposes and not intended
-// for production code.
-func crdbForceRetry(sess db.Session) error {
- var err error
-
- // The first statement in a transaction can be retried transparently on the
- // server, so we need to add a placeholder statement so that our
- // force_retry() statement isn't the first one.
- _, err = sess.SQL().Exec(`SELECT 1`)
- if err != nil {
- return err
- }
-
- // If force_retry is called during the specified interval from the beginning
- // of the transaction it returns a retryable error. If not, 0 is returned
- // instead of an error.
- _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`)
- if err != nil {
- return err
- }
-
- return nil
-}
-
-func main() {
- // Connect to the local CockroachDB node.
- sess, err := cockroachdb.Open(settings)
- if err != nil {
- log.Fatal("cockroachdb.Open: ", err)
- }
- defer sess.Close()
-
- // Adjust this number to fit your specific needs (set to 5, by default)
- // sess.SetMaxTransactionRetries(10)
-
- // Create the "accounts" table
- createTables(sess)
-
- // Delete all the previous items in the "accounts" table.
- err = Accounts(sess).Truncate()
- if err != nil {
- log.Fatal("Truncate: ", err)
- }
-
- // Create a new account with a balance of 1000.
- account1 := Account{Balance: 1000}
- err = Accounts(sess).InsertReturning(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Create a new account with a balance of 250.
- account2 := Account{Balance: 250}
- err = Accounts(sess).InsertReturning(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Change the balance of the first account.
- account1.Balance = 500
- err = sess.Save(&account1)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Change the balance of the second account.
- account2.Balance = 999
- err = sess.Save(&account2)
- if err != nil {
- log.Fatal("sess.Save: ", err)
- }
-
- // Printing records
- printRecords(sess)
-
- // Delete the first record.
- err = sess.Delete(&account1)
- if err != nil {
- log.Fatal("Delete: ", err)
- }
-
- startTime := time.Now()
-
- // Add a couple of new records within a transaction.
- err = sess.Tx(func(tx db.Session) error {
- var err error
-
- if err = tx.Save(&Account{Balance: 887}); err != nil {
- return err
- }
-
- if time.Now().Sub(startTime) < time.Second*1 {
- // Will fail continuously for 2 seconds.
- if err = crdbForceRetry(tx); err != nil {
- return err
- }
- }
-
- if err = tx.Save(&Account{Balance: 342}); err != nil {
- return err
- }
-
- return nil
- })
- if err != nil {
- log.Fatal("Could not commit transaction: ", err)
- }
-
- // Printing records
- printRecords(sess)
-}
-
-func printRecords(sess db.Session) {
- accounts := []Account{}
- err := Accounts(sess).Find().All(&accounts)
- if err != nil {
- log.Fatal("Find: ", err)
- }
- log.Printf("Balances:")
- for i := range accounts {
- fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance)
- }
-}
diff --git a/src/current/_includes/v22.1/backups/advanced-examples-list.md b/src/current/_includes/v22.1/backups/advanced-examples-list.md
deleted file mode 100644
index fb519d7bfe0..00000000000
--- a/src/current/_includes/v22.1/backups/advanced-examples-list.md
+++ /dev/null
@@ -1,11 +0,0 @@
-For examples of advanced `BACKUP` and `RESTORE` use cases, see:
-
-- [Incremental backups with a specified destination](take-full-and-incremental-backups.html#incremental-backups-with-explicitly-specified-destinations)
-- [Backup with revision history and point-in-time restore](take-backups-with-revision-history-and-restore-from-a-point-in-time.html)
-- [Locality-aware backup and restore](take-and-restore-locality-aware-backups.html)
-- [Encrypted backup and restore](take-and-restore-encrypted-backups.html)
-- [Restore into a different database](restore.html#restore-tables-into-a-different-database)
-- [Remove the foreign key before restore](restore.html#remove-the-foreign-key-before-restore)
-- [Restoring users from `system.users` backup](restore.html#restoring-users-from-system-users-backup)
-- [Show an incremental backup at a different location](show-backup.html#show-a-backup-taken-with-the-incremental-location-option)
-- [Exclude a table's data from backups](take-full-and-incremental-backups.html#exclude-a-tables-data-from-backups)
diff --git a/src/current/_includes/v22.1/backups/aws-auth-note.md b/src/current/_includes/v22.1/backups/aws-auth-note.md
deleted file mode 100644
index 759a8ad1d3a..00000000000
--- a/src/current/_includes/v22.1/backups/aws-auth-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-The examples in this section use the **default** `AUTH=specified` parameter. For more detail on how to use `implicit` authentication with Amazon S3 buckets, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/backups/azure-url-encode.md b/src/current/_includes/v22.1/backups/azure-url-encode.md
deleted file mode 100644
index 41036bfea3d..00000000000
--- a/src/current/_includes/v22.1/backups/azure-url-encode.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-Azure storage containers **require** a [url encoded](https://en.wikipedia.org/wiki/Percent-encoding) `ACCOUNT_KEY` since it is base64-encoded and may contain +, /, = characters. For more detail on how to pass your Azure Storage credentials with this parameter, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication).
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/backups/backup-options.md b/src/current/_includes/v22.1/backups/backup-options.md
deleted file mode 100644
index 2c6f112f38f..00000000000
--- a/src/current/_includes/v22.1/backups/backup-options.md
+++ /dev/null
@@ -1,7 +0,0 @@
- Option | Value | Description
------------------------------------------------------------------+-------------------------+------------------------------
-`revision_history` | N/A | Create a backup with full [revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), which records every change made to the cluster within the garbage collection period leading up to and including the given timestamp.
-`encryption_passphrase` | [`STRING`](string.html) | The passphrase used to [encrypt the files](take-and-restore-encrypted-backups.html) (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same passphrase is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html). There is no practical limit on the length of the passphrase.
-`DETACHED` | N/A | When a backup runs in `DETACHED` mode, it will execute asynchronously. The job ID will be returned after the backup [job creation](backup-architecture.html#job-creation-phase) completes. Note that with `DETACHED` specified, further job information and the job completion status will not be returned. For more on the differences between the returned job data, see the [example](backup.html#run-a-backup-asynchronously) below. To check on the job status, use the [`SHOW JOBS`](show-jobs.html) statement. To run a backup within a [transaction](transactions.html), use the `DETACHED` option.
-`kms` | [`STRING`](string.html) | The [key management service (KMS) URI](take-and-restore-encrypted-backups.html#uri-formats) (or a [comma-separated list of URIs](take-and-restore-encrypted-backups.html#take-a-backup-with-multi-region-encryption)) used to encrypt the files (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same KMS URI is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html#restore-from-an-encrypted-backup) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html). Currently, AWS KMS and Google Cloud KMS are supported.
-`incremental_location` | [`STRING`](string.html) | Create an incremental backup in a different location than the default incremental backup location. See [Incremental backups with explicitly specified destinations](take-full-and-incremental-backups.html#incremental-backups-with-explicitly-specified-destinations) for usage.
diff --git a/src/current/_includes/v22.1/backups/backup-to-deprec.md b/src/current/_includes/v22.1/backups/backup-to-deprec.md
deleted file mode 100644
index 1515e96c713..00000000000
--- a/src/current/_includes/v22.1/backups/backup-to-deprec.md
+++ /dev/null
@@ -1,7 +0,0 @@
-{{site.data.alerts.callout_danger}}
-The `BACKUP ... TO` and `RESTORE ... FROM` syntax is **deprecated** as of v22.1 and will be removed in a future release.
-
-We recommend using the `BACKUP ... INTO {collectionURI}` syntax, which creates or adds to a [backup collection]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections) in your storage location. For restoring backups, we recommend using `RESTORE FROM {backup} IN {collectionURI}` with `{backup}` being [`LATEST`]({% link {{ page.version.version }}/restore.md %}#restore-the-most-recent-backup) or a specific [subdirectory]({% link {{ page.version.version }}/restore.md %}#subdir-param).
-
-For guidance on the syntax for backups and restores, see the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}#examples) and [`RESTORE`]({% link {{ page.version.version }}/restore.md %}#examples) examples.
-{{site.data.alerts.end}}
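As a minimal sketch of the recommended syntax described in this callout (the storage URI and credentials below are placeholders, not working values):

~~~ sql
-- Create or add to a backup collection, then restore its most recent backup.
BACKUP INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';
RESTORE FROM LATEST IN 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';
~~~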
diff --git a/src/current/_includes/v22.1/backups/bulk-auth-options.md b/src/current/_includes/v22.1/backups/bulk-auth-options.md
deleted file mode 100644
index ab02410dcac..00000000000
--- a/src/current/_includes/v22.1/backups/bulk-auth-options.md
+++ /dev/null
@@ -1,4 +0,0 @@
-The following examples make use of:
-
-- Amazon S3 connection strings. For guidance on connecting to other storage options or using other authentication parameters instead, read [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html#example-file-urls).
-- The **default** `AUTH=specified` parameter. For guidance on using `AUTH=implicit` authentication with Amazon S3 buckets instead, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/backups/destination-file-privileges.md b/src/current/_includes/v22.1/backups/destination-file-privileges.md
deleted file mode 100644
index 913e042461c..00000000000
--- a/src/current/_includes/v22.1/backups/destination-file-privileges.md
+++ /dev/null
@@ -1,12 +0,0 @@
-The destination file URL does **not** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios:
-
-- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default.
-- [Userfile](use-userfile-for-bulk-operations.html)
-
-The destination file URL **does** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios:
-
-- S3 or GS using `IMPLICIT` credentials
-- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3
-- [Nodelocal](cockroach-nodelocal-upload.html)
-
-We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html).
diff --git a/src/current/_includes/v22.1/backups/encrypted-backup-description.md b/src/current/_includes/v22.1/backups/encrypted-backup-description.md
deleted file mode 100644
index f0c39d2551a..00000000000
--- a/src/current/_includes/v22.1/backups/encrypted-backup-description.md
+++ /dev/null
@@ -1,11 +0,0 @@
-You can encrypt full or incremental backups with a passphrase by using the [`encryption_passphrase` option](backup.html#with-encryption-passphrase). Files written by the backup (including `BACKUP` manifests and data files) are encrypted using the specified passphrase to derive a key. To restore the encrypted backup, the same `encryption_passphrase` option (with the same passphrase) must be included in the [`RESTORE`](restore.html) statement.
-
-When used with [incremental backups](take-full-and-incremental-backups.html#incremental-backups), the `encryption_passphrase` option is applied to all the [backup file URLs](backup.html#backup-file-urls), which means the same passphrase must be used when appending another incremental backup to an existing backup. Similarly, when used with [locality-aware backups](take-and-restore-locality-aware-backups.html), the passphrase provided is applied to files in all localities.
-
-Encryption is done using [AES-256-GCM](https://en.wikipedia.org/wiki/Galois/Counter_Mode), and GCM is used to both encrypt and authenticate the files. A random [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) is used to derive a once-per-backup [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) key from the specified passphrase, and then a random [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector) is used per-file. CockroachDB uses [PBKDF2](https://en.wikipedia.org/wiki/PBKDF2) with 64,000 iterations for the key derivation.
-
-{{site.data.alerts.callout_info}}
-`BACKUP` and `RESTORE` will use more memory when using encryption, as both the plain-text and cipher-text of a given file are held in memory during encryption and decryption.
-{{site.data.alerts.end}}
-
-For an example of an encrypted backup, see [Create an encrypted backup](take-and-restore-encrypted-backups.html#take-an-encrypted-backup-using-a-passphrase).
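A minimal sketch of the option described above (URI, credentials, and passphrase are placeholders), assuming the same passphrase is later supplied to `RESTORE`:

~~~ sql
BACKUP INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    WITH encryption_passphrase = '{PASSPHRASE}';

-- The same passphrase is required to restore or inspect the backup.
RESTORE FROM LATEST IN 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
    WITH encryption_passphrase = '{PASSPHRASE}';
~~~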
diff --git a/src/current/_includes/v22.1/backups/file-size-setting.md b/src/current/_includes/v22.1/backups/file-size-setting.md
deleted file mode 100644
index 8f94d415e11..00000000000
--- a/src/current/_includes/v22.1/backups/file-size-setting.md
+++ /dev/null
@@ -1,5 +0,0 @@
-{{site.data.alerts.callout_info}}
-To set a target for the amount of backup data written to each backup file, use the `bulkio.backup.file_size` [cluster setting](cluster-settings.html).
-
-See the [`SET CLUSTER SETTING`](set-cluster-setting.html) page for more details on using cluster settings.
-{{site.data.alerts.end}}
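For example, a sketch of adjusting that setting (the 256 MiB target here is only illustrative):

~~~ sql
SET CLUSTER SETTING bulkio.backup.file_size = '256MiB';
~~~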
diff --git a/src/current/_includes/v22.1/backups/gcs-auth-note.md b/src/current/_includes/v22.1/backups/gcs-auth-note.md
deleted file mode 100644
index 360ea21cb63..00000000000
--- a/src/current/_includes/v22.1/backups/gcs-auth-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-The examples in this section use the `AUTH=specified` parameter, which will be the default behavior in v21.2 and beyond for connecting to Google Cloud Storage. For more detail on how to pass your Google Cloud Storage credentials with this parameter, or, how to use `implicit` authentication, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/backups/gcs-default-deprec.md b/src/current/_includes/v22.1/backups/gcs-default-deprec.md
deleted file mode 100644
index aafea15e804..00000000000
--- a/src/current/_includes/v22.1/backups/gcs-default-deprec.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-**Deprecation notice:** Currently, GCS connections default to the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html). This default behavior will no longer be supported in v21.2. If you are relying on this default behavior, we recommend adjusting your queries and scripts to now specify the `AUTH` parameter you want to use. Similarly, if you are using the `cloudstorage.gs.default.key` cluster setting to authorize your GCS connection, we recommend switching to use `AUTH=specified` or `AUTH=implicit`. `AUTH=specified` will be the default behavior in v21.2 and beyond.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/backups/no-incremental-restore.md b/src/current/_includes/v22.1/backups/no-incremental-restore.md
deleted file mode 100644
index b2f071c1e5e..00000000000
--- a/src/current/_includes/v22.1/backups/no-incremental-restore.md
+++ /dev/null
@@ -1 +0,0 @@
-When you restore from an incremental backup, you're restoring the **entire** table, database, or cluster. CockroachDB uses both the latest (or a [specific](restore.html#restore-a-specific-backup)) incremental backup and the full backup during this process. You cannot restore an incremental backup without a full backup. Furthermore, it is not possible to restore over a [table](restore.html#tables), [database](restore.html#databases), or [cluster](restore.html#full-cluster) with existing data. See [Restore types](restore.html#restore-types) for detail on the types of backups you can restore.
diff --git a/src/current/_includes/v22.1/backups/retry-failure.md b/src/current/_includes/v22.1/backups/retry-failure.md
deleted file mode 100644
index 81740c0a27d..00000000000
--- a/src/current/_includes/v22.1/backups/retry-failure.md
+++ /dev/null
@@ -1 +0,0 @@
-If a backup job encounters too many retryable errors, it will enter a [`failed` state](show-jobs.html#job-status) with the most recent error, which allows subsequent backups the chance to succeed. Refer to [Set up monitoring for the backup schedule](manage-a-backup-schedule.html#set-up-monitoring-for-the-backup-schedule) for metrics to track backup failures.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/backups/show-backup-replace-diagram.html b/src/current/_includes/v22.1/backups/show-backup-replace-diagram.html
deleted file mode 100644
index 539b72b45da..00000000000
--- a/src/current/_includes/v22.1/backups/show-backup-replace-diagram.html
+++ /dev/null
@@ -1,50 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/cdc/avro-limitations.md b/src/current/_includes/v22.1/cdc/avro-limitations.md
deleted file mode 100644
index aa9cec6e5c3..00000000000
--- a/src/current/_includes/v22.1/cdc/avro-limitations.md
+++ /dev/null
@@ -1,29 +0,0 @@
-- [Decimals](decimal.html) must have precision specified.
-- [`BYTES`](bytes.html) (or its aliases `BYTEA` and `BLOB`) are often used to store machine-readable data. When you stream these types through a changefeed with [`format=avro`](create-changefeed.html#format), CockroachDB does not encode or change the data. However, Avro clients can often include escape sequences to present the data in a printable format, which can interfere with deserialization. A potential solution is to hex-encode `BYTES` values when initially inserting them into CockroachDB. This will ensure that Avro clients can consistently decode the hexadecimal. Note that hex-encoding values at insertion will increase record size.
-- [`BIT`](bit.html) and [`VARBIT`](bit.html) types are encoded as arrays of 64-bit integers.
-
- For efficiency, CockroachDB encodes `BIT` and `VARBIT` bitfield types as arrays of 64-bit integers. That is, [base-2 (binary format)](https://en.wikipedia.org/wiki/Binary_number#Conversion_to_and_from_other_numeral_systems) `BIT` and `VARBIT` data types are converted to base 10 and stored in arrays. Encoding in CockroachDB is [big-endian](https://en.wikipedia.org/wiki/Endianness); therefore, the last value may have many trailing zeroes. For this reason, the first value of each array is the number of bits that are used in the last value of the array.
-
- For instance, if the bitfield is 129 bits long, there will be 4 integers in the array. The first integer will be `1`, representing the number of bits used in the last value; the second integer will be the first 64 bits; the third integer will be bits 65–128; and the last integer will be either `0` or `9223372036854775808` (i.e., the integer with only the first bit set, or `1000000000000000000000000000000000000000000000000000000000000000` in base 2).
-
- This example is base-10 encoded into an array as follows:
-
- ~~~
- {"array": [1, , , 0 or 9223372036854775808]}
- ~~~
-
- For downstream processing, it is necessary to base-2 encode every element in the array (except for the first element). The first number in the array gives you the number of bits to take from the last base-2 number — that is, the most significant bits. So, in the example above this would be `1`. Finally, all the base-2 numbers can be appended together, which will result in the original number of bits, 129.
-
- In a different example of this process where the bitfield is 136 bits long, the array would be similar to the following when base-10 encoded:
-
- ~~~
- {"array": [8, 18293058736425533439, 18446744073709551615, 13690942867206307840]}
- ~~~
-
- To then work with this data, you would convert each of the elements in the array to base-2 numbers, besides the first element. For the above array, this would convert to:
-
- ~~~
- [8, 1111110111011011111111111111111111111111111111111111111111111111, 1111111111111111111111111111111111111111111111111111111111111111, 1011111000000000000000000000000000000000000000000000000000000000]
- ~~~
-
- Next, use the first element in the array (`8`) to take that many bits from the last base-2 element, which yields `10111110`. Finally, append the base-2 numbers together: in the above array, the second element, the third element, and the truncated last element. This results in the original 136 bits.
diff --git a/src/current/_includes/v22.1/cdc/cdc-cloud-rangefeed.md b/src/current/_includes/v22.1/cdc/cdc-cloud-rangefeed.md
deleted file mode 100644
index 85e6255848e..00000000000
--- a/src/current/_includes/v22.1/cdc/cdc-cloud-rangefeed.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-If you are working on a CockroachDB {{ site.data.products.serverless }} cluster, the `kv.rangefeed.enabled` cluster setting is enabled by default.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/cdc/client-key-encryption.md b/src/current/_includes/v22.1/cdc/client-key-encryption.md
deleted file mode 100644
index c7c7be4c38c..00000000000
--- a/src/current/_includes/v22.1/cdc/client-key-encryption.md
+++ /dev/null
@@ -1 +0,0 @@
-**Note:** Client keys are often encrypted. You will receive an error if you pass an encrypted client key in your changefeed statement. To decrypt the client key, run: `openssl rsa -in key.pem -out key.decrypt.pem -passin pass:{PASSWORD}`. Once decrypted, be sure to update your changefeed statement to use the new `key.decrypt.pem` file instead.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/cdc/configure-all-changefeed.md b/src/current/_includes/v22.1/cdc/configure-all-changefeed.md
deleted file mode 100644
index b2d87c8cd5e..00000000000
--- a/src/current/_includes/v22.1/cdc/configure-all-changefeed.md
+++ /dev/null
@@ -1,19 +0,0 @@
-It is useful to be able to pause all running changefeeds during troubleshooting or testing, or when you need to reduce CPU load.
-
-To pause all running changefeeds:
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-PAUSE JOBS (WITH x AS (SHOW CHANGEFEED JOBS) SELECT job_id FROM x WHERE status = ('running'));
-~~~
-
-This will change the status for each of the running changefeeds to `paused`, which can be verified with [`SHOW CHANGEFEED JOBS`](show-jobs.html#show-changefeed-jobs).
-
-To resume all running changefeeds:
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-RESUME JOBS (WITH x AS (SHOW CHANGEFEED JOBS) SELECT job_id FROM x WHERE status = ('paused'));
-~~~
-
-This will resume the changefeeds and update the status for each of the changefeeds to `running`.
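-
-To verify the statuses afterward, you can reuse the same pattern to list each changefeed job and its current status (a sketch using the columns returned by `SHOW CHANGEFEED JOBS`):
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-WITH x AS (SHOW CHANGEFEED JOBS) SELECT job_id, status FROM x;
-~~~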
diff --git a/src/current/_includes/v22.1/cdc/confluent-cloud-sr-url.md b/src/current/_includes/v22.1/cdc/confluent-cloud-sr-url.md
deleted file mode 100644
index 556adbd7bff..00000000000
--- a/src/current/_includes/v22.1/cdc/confluent-cloud-sr-url.md
+++ /dev/null
@@ -1 +0,0 @@
-To connect to Confluent Cloud, use the following URL structure: `'https://{API_KEY_ID}:{API_SECRET_URL_ENCODED}@{CONFLUENT_REGISTRY_URL}:443'`. See the [Stream a Changefeed to a Confluent Cloud Kafka Cluster](stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.html#step-8-create-a-changefeed) tutorial for further detail.
\ No newline at end of file
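-
-For example, a sketch of a changefeed statement that uses this URL structure; the Kafka broker address, API key, and secret are placeholders:
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE movr.users
-  INTO 'kafka://{KAFKA_BROKER}:9092'
-  WITH format = avro,
-       confluent_schema_registry = 'https://{API_KEY_ID}:{API_SECRET_URL_ENCODED}@{CONFLUENT_REGISTRY_URL}:443';
-~~~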
diff --git a/src/current/_includes/v22.1/cdc/core-csv.md b/src/current/_includes/v22.1/cdc/core-csv.md
deleted file mode 100644
index 4ee6bfc587d..00000000000
--- a/src/current/_includes/v22.1/cdc/core-csv.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. To set the display format, use the [`--format=csv` flag](cockroach-sql.html#sql-flag-format) when starting the [built-in SQL client](cockroach-sql.html), or set the [`\set display_format=csv` option](cockroach-sql.html#client-side-options) once the SQL client is open.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/cdc/core-url.md b/src/current/_includes/v22.1/cdc/core-url.md
deleted file mode 100644
index 7241e203aa7..00000000000
--- a/src/current/_includes/v22.1/cdc/core-url.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](cancel-query.html) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default.
-{{site.data.alerts.end}}
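-
-For example, a sketch of canceling a core changefeed from a separate connection; the filter on the query text is illustrative and should identify exactly one running changefeed:
-
-~~~ sql
-CANCEL QUERY (WITH x AS (SHOW CLUSTER QUERIES) SELECT query_id FROM x WHERE query LIKE 'EXPERIMENTAL CHANGEFEED%');
-~~~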
diff --git a/src/current/_includes/v22.1/cdc/create-core-changefeed-avro.md b/src/current/_includes/v22.1/cdc/create-core-changefeed-avro.md
deleted file mode 100644
index 14051253a22..00000000000
--- a/src/current/_includes/v22.1/cdc/create-core-changefeed-avro.md
+++ /dev/null
@@ -1,122 +0,0 @@
-In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas.
-
-1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start-single-node \
- --insecure \
- --listen-addr=localhost \
- --background
- ~~~
-
-2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/).
-
-3. Move into the extracted `confluent-` directory and start Confluent:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ./bin/confluent local services start
- ~~~
-
- Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives) and the [Quick Start Guide](https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html#ce-quickstart).
-
-4. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv
- ~~~
-
- {% include {{ page.version.version }}/cdc/core-url.md %}
-
- {% include {{ page.version.version }}/cdc/core-csv.md %}
-
-5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING kv.rangefeed.enabled = true;
- ~~~
-
-6. Create table `bar`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bar (a INT PRIMARY KEY);
- ~~~
-
-7. Insert a row into the table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bar VALUES (0);
- ~~~
-
-8. Start the core changefeed:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > EXPERIMENTAL CHANGEFEED FOR bar WITH format = avro, confluent_schema_registry = 'http://localhost:8081';
- ~~~
-
- ~~~
- table,key,value
- bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000
- ~~~
-
-9. In a new terminal, add another row:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)"
- ~~~
-
-10. Back in the terminal where the core changefeed is streaming, the output will appear:
-
- ~~~
- bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002
- ~~~
-
- Note that records may take a couple of seconds to display in the core changefeed.
-
-11. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
-
-12. To stop `cockroach`:
-
- Get the process ID of the node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ps -ef | grep cockroach | grep -v grep
- ~~~
-
- ~~~
- 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost
- ~~~
-
- Gracefully shut down the node, specifying its process ID:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- kill -TERM 21766
- ~~~
-
- ~~~
- initiating graceful shutdown of server
- server drained and shutdown completed
- ~~~
-
-13. To stop Confluent, move into the extracted `confluent-` directory and stop Confluent:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ./bin/confluent local services stop
- ~~~
-
- To terminate all Confluent processes, use:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ./bin/confluent local destroy
- ~~~
diff --git a/src/current/_includes/v22.1/cdc/create-core-changefeed.md b/src/current/_includes/v22.1/cdc/create-core-changefeed.md
deleted file mode 100644
index fa397cd36f5..00000000000
--- a/src/current/_includes/v22.1/cdc/create-core-changefeed.md
+++ /dev/null
@@ -1,98 +0,0 @@
-In this example, you'll set up a core changefeed for a single-node cluster.
-
-1. In a terminal window, start `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start-single-node \
- --insecure \
- --listen-addr=localhost \
- --background
- ~~~
-
-2. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --url="postgresql://root@127.0.0.1:26257?sslmode=disable" \
- --format=csv
- ~~~
-
- {% include {{ page.version.version }}/cdc/core-url.md %}
-
- {% include {{ page.version.version }}/cdc/core-csv.md %}
-
-3. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING kv.rangefeed.enabled = true;
- ~~~
-
-4. Create table `foo`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE foo (a INT PRIMARY KEY);
- ~~~
-
-5. Insert a row into the table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO foo VALUES (0);
- ~~~
-
-6. Start the core changefeed:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > EXPERIMENTAL CHANGEFEED FOR foo;
- ~~~
- ~~~
- table,key,value
- foo,[0],"{""after"": {""a"": 0}}"
- ~~~
-
-7. In a new terminal, add another row:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)"
- ~~~
-
-8. Back in the terminal where the core changefeed is streaming, the following output will appear:
-
- ~~~
- foo,[1],"{""after"": {""a"": 1}}"
- ~~~
-
- Note that records may take a couple of seconds to display in the core changefeed.
-
-9. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
-
-10. To stop `cockroach`:
-
- Get the process ID of the node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- ps -ef | grep cockroach | grep -v grep
- ~~~
-
- ~~~
- 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost
- ~~~
-
- Gracefully shut down the node, specifying its process ID:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- kill -TERM 21766
- ~~~
-
- ~~~
- initiating graceful shutdown of server
- server drained and shutdown completed
- ~~~
diff --git a/src/current/_includes/v22.1/cdc/create-example-db-cdc.md b/src/current/_includes/v22.1/cdc/create-example-db-cdc.md
deleted file mode 100644
index 17902b10eac..00000000000
--- a/src/current/_includes/v22.1/cdc/create-example-db-cdc.md
+++ /dev/null
@@ -1,50 +0,0 @@
-1. Create a database called `cdc_demo`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE cdc_demo;
- ~~~
-
-1. Set the database as the default:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET DATABASE = cdc_demo;
- ~~~
-
-1. Create a table and add data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE office_dogs (
- id INT PRIMARY KEY,
- name STRING);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO office_dogs VALUES
- (1, 'Petee'),
- (2, 'Carl');
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1;
- ~~~
-
-1. Create another table and add data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE employees (
- dog_id INT REFERENCES office_dogs (id),
- employee_name STRING);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO employees VALUES
- (1, 'Lauren'),
- (2, 'Spencer');
- ~~~
diff --git a/src/current/_includes/v22.1/cdc/external-urls.md b/src/current/_includes/v22.1/cdc/external-urls.md
deleted file mode 100644
index f4aa029779a..00000000000
--- a/src/current/_includes/v22.1/cdc/external-urls.md
+++ /dev/null
@@ -1,48 +0,0 @@
-~~~
-[scheme]://[host]/[path]?[parameters]
-~~~
-
-Location | Scheme | Host | Parameters |
-|-------------------------------------------------------------+-------------+--------------------------------------------------+----------------------------------------------------------------------------
-Amazon | `s3` | Bucket name | `AUTH` [1](#considerations) (optional; can be `implicit` or `specified`), `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`
-Azure | `azure` | N/A (see [Example file URLs](#example-file-urls)) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME`
-Google Cloud [2](#considerations) | `gs` | Bucket name | `AUTH` (optional; can be `default`, `implicit`, or `specified`), `CREDENTIALS`
-HTTP [3](#considerations) | `http` | Remote host | N/A
-NFS/Local [4](#considerations) | `nodelocal` | `nodeID` or `self` [5](#considerations) (see [Example file URLs](#example-file-urls)) | N/A
-S3-compatible services [6](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [7](#considerations) (optional), `AWS_ENDPOINT`
-
-{{site.data.alerts.callout_info}}
-The location parameters often contain special characters that need to be URI-encoded. Use JavaScript's [`encodeURIComponent`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [`url.QueryEscape`](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_info}}
-If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB.
-
- If you cannot run a full proxy, you can disable external HTTP(S) access (as well as custom HTTP(S) endpoints) when performing bulk operations (e.g., [`BACKUP`](backup.html), [`RESTORE`](restore.html), etc.) by using the [`--external-io-disable-http` flag](cockroach-start.html#security). You can also disable the use of implicit credentials when accessing external cloud storage services for various bulk operations by using the [`--external-io-disable-implicit-credentials` flag](cockroach-start.html#security).
-{{site.data.alerts.end}}
-
-
-
-- 1 If the `AUTH` parameter is not provided, AWS connections default to `specified` and the access keys must be provided in the URI parameters. If the `AUTH` parameter is `implicit`, the access keys can be omitted and [the credentials will be loaded from the environment](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/).
-
-- 2 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) will be used if it is non-empty, otherwise the `implicit` behavior is used. If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be Base64-encoded (using the standard encoding in [RFC 4648](https://tools.ietf.org/html/rfc4648)).
-
-- 3 You can create your own HTTP server with [Caddy or nginx](use-a-local-file-server-for-bulk-operations.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs.
-
-- 4 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](cockroach-start.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled.
-
-- 5 Using a `nodeID` is required and the data files will be in the `extern` directory of the specified node. In most cases (including single-node clusters), using `nodelocal://1/` is sufficient. Use `self` if you do not want to specify a `nodeID`, and the individual data files will be in the `extern` directories of arbitrary nodes; however, to work correctly, each node must have the [`--external-io-dir` flag](cockroach-start.html#general) point to the same NFS mount or other network-backed, shared storage.
-
-- 6 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service.
-
-- 7 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.
-
-#### Example file URLs
-
-Location | Example
--------------+----------------------------------------------------------------------------------
-Amazon S3 | `s3://acme-co/employees?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456`
-Azure | `azure://employees?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co`
-Google Cloud | `gs://acme-co`
-HTTP | `http://localhost:8080/employees`
-NFS/Local | `nodelocal://1/path/employees`, `nodelocal://self/nfsmount/backups/employees` [5](#considerations)
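-
-As a sketch of how one of these URLs is used in a changefeed statement (the bucket name and credentials are placeholders):
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE movr.users
-  INTO 's3://{BUCKET_NAME}/{PATH}?AWS_ACCESS_KEY_ID={ACCESS_KEY}&AWS_SECRET_ACCESS_KEY={URL_ENCODED_SECRET}'
-  WITH updated;
-~~~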
diff --git a/src/current/_includes/v22.1/cdc/initial-scan-limit-alter-changefeed.md b/src/current/_includes/v22.1/cdc/initial-scan-limit-alter-changefeed.md
deleted file mode 100644
index feb0c8748e4..00000000000
--- a/src/current/_includes/v22.1/cdc/initial-scan-limit-alter-changefeed.md
+++ /dev/null
@@ -1,2 +0,0 @@
-You cannot use the new `initial_scan = "yes"/"no"/"only"` syntax with {% if page.name == "alter-changefeed.md" %} `ALTER CHANGEFEED` {% else %}
-[`ALTER CHANGEFEED`](alter-changefeed.html) {% endif %} in v22.1. To ensure that you can modify a changefeed with the `initial_scan` options, use the previous syntax of `initial_scan`, `no_initial_scan`, and `initial_scan_only`.
\ No newline at end of file
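-
-For example, a sketch of creating a changefeed with the previous syntax so that it can later be modified with `ALTER CHANGEFEED`; the Kafka broker address is a placeholder:
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE movr.rides INTO 'kafka://{KAFKA_BROKER}:9092' WITH initial_scan;
-~~~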
diff --git a/src/current/_includes/v22.1/cdc/metrics-labels.md b/src/current/_includes/v22.1/cdc/metrics-labels.md
deleted file mode 100644
index 398f6f72ca2..00000000000
--- a/src/current/_includes/v22.1/cdc/metrics-labels.md
+++ /dev/null
@@ -1,10 +0,0 @@
-To measure metrics per changefeed, you can define a "metrics label" for one or more changefeeds. The labeled changefeeds will increment each [changefeed metric](monitor-and-debug-changefeeds.html#metrics). Metrics label information is sent with time-series metrics to `http://{host}:{http-port}/_status/vars`, viewable via the [Prometheus endpoint](monitoring-and-alerting.html#prometheus-endpoint). An aggregated metric of all changefeeds is also measured.
-
-Consider the following when applying metrics labels to changefeeds (a sketch follows the list):
-
-- Metrics labels are **not** available in CockroachDB {{ site.data.products.cloud }}.
-- The `COCKROACH_EXPERIMENTAL_ENABLE_PER_CHANGEFEED_METRICS` [environment variable](cockroach-commands.html#environment-variables) must be specified to use this feature.
-- The `server.child_metrics.enabled` [cluster setting](cluster-settings.html) must be set to `true` before using the `metrics_label` option.
-- Metrics label information is sent to the `_status/vars` endpoint, but will **not** show up in [`debug.zip`](cockroach-debug-zip.html) or the [DB Console](ui-overview.html).
-- Introducing labels to isolate a changefeed's metrics can significantly increase cardinality. To prevent a cardinality explosion, there is a limit of 1024 unique labels. When labels are applied to high-cardinality data (data with a high number of unique values), each labeled changefeed produces additional metric-series data that grows over time, which can impact performance.
-- The maximum length of a metrics label is 128 bytes.
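-
-A sketch of attaching a label to a changefeed once the prerequisites above are in place; the sink address and label name are placeholders:
-
-~~~ sql
-SET CLUSTER SETTING server.child_metrics.enabled = true;
-CREATE CHANGEFEED FOR TABLE movr.rides INTO 'kafka://{KAFKA_BROKER}:9092' WITH metrics_label = 'rides_feed';
-~~~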
diff --git a/src/current/_includes/v22.1/cdc/modify-changefeed.md b/src/current/_includes/v22.1/cdc/modify-changefeed.md
deleted file mode 100644
index 8ca39aff5ad..00000000000
--- a/src/current/_includes/v22.1/cdc/modify-changefeed.md
+++ /dev/null
@@ -1,9 +0,0 @@
-To modify an {{ site.data.products.enterprise }} changefeed, [pause](create-and-configure-changefeeds.html#pause) the job and then use:
-
-~~~ sql
-ALTER CHANGEFEED job_id {ADD table DROP table SET option UNSET option};
-~~~
-
-You can add new table targets, remove them, set new [changefeed options](create-changefeed.html#options), and unset them.
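-
-For example, a sketch of pausing a changefeed, adding a target and an option, and resuming it (`{job_id}` is a placeholder for the changefeed's job ID):
-
-~~~ sql
-PAUSE JOB {job_id};
-ALTER CHANGEFEED {job_id} ADD movr.vehicles SET resolved = '10s';
-RESUME JOB {job_id};
-~~~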
-
-For more information, see [`ALTER CHANGEFEED`](alter-changefeed.html).
diff --git a/src/current/_includes/v22.1/cdc/note-changefeed-message-page.md b/src/current/_includes/v22.1/cdc/note-changefeed-message-page.md
deleted file mode 100644
index d61d4299b43..00000000000
--- a/src/current/_includes/v22.1/cdc/note-changefeed-message-page.md
+++ /dev/null
@@ -1 +0,0 @@
-For an overview of the messages emitted from changefeeds, see the [Changefeed Messages](changefeed-messages.html) page.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/cdc/options-table-note.md b/src/current/_includes/v22.1/cdc/options-table-note.md
deleted file mode 100644
index 61a27aefcc0..00000000000
--- a/src/current/_includes/v22.1/cdc/options-table-note.md
+++ /dev/null
@@ -1 +0,0 @@
-This table shows the parameters for changefeeds to a specific sink. The `CREATE CHANGEFEED` page provides a list of all the available [options](create-changefeed.html#options).
diff --git a/src/current/_includes/v22.1/cdc/print-key.md b/src/current/_includes/v22.1/cdc/print-key.md
deleted file mode 100644
index ab0b0924d30..00000000000
--- a/src/current/_includes/v22.1/cdc/print-key.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-This example only prints the value. To print both the key and value of each message in the changefeed (e.g., to observe what happens with `DELETE`s), use the `--property print.key=true` flag.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/cdc/schema-registry-timeout.md b/src/current/_includes/v22.1/cdc/schema-registry-timeout.md
deleted file mode 100644
index a6571084ef3..00000000000
--- a/src/current/_includes/v22.1/cdc/schema-registry-timeout.md
+++ /dev/null
@@ -1 +0,0 @@
-Use the {% if page.name == "create-changefeed.md" %} `timeout={duration}` query parameter {% else %} [`timeout={duration}` query parameter](create-changefeed.html#confluent-registry) {% endif %}([duration string](https://pkg.go.dev/time#ParseDuration)) in your Confluent Schema Registry URI to change the default timeout for contacting the schema registry. By default, the timeout is 30 seconds.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/cdc/sql-cluster-settings-example.md b/src/current/_includes/v22.1/cdc/sql-cluster-settings-example.md
deleted file mode 100644
index e3e1025135a..00000000000
--- a/src/current/_includes/v22.1/cdc/sql-cluster-settings-example.md
+++ /dev/null
@@ -1,27 +0,0 @@
-1. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure
- ~~~
-
-1. Set your organization name and [{{ site.data.products.enterprise }} license](enterprise-licensing.html) key that you received via email:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING cluster.organization = '';
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING enterprise.license = '';
- ~~~
-
-1. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING kv.rangefeed.enabled = true;
- ~~~
-
- {% include {{ page.version.version }}/cdc/cdc-cloud-rangefeed.md %}
diff --git a/src/current/_includes/v22.1/cdc/url-encoding.md b/src/current/_includes/v22.1/cdc/url-encoding.md
deleted file mode 100644
index 2a681d7f913..00000000000
--- a/src/current/_includes/v22.1/cdc/url-encoding.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-Parameters should always be URI-encoded before they are included in the changefeed's URI, as they often contain special characters. Use JavaScript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/cdc/virtual-computed-column-cdc.md b/src/current/_includes/v22.1/cdc/virtual-computed-column-cdc.md
deleted file mode 100644
index cf1267c5206..00000000000
--- a/src/current/_includes/v22.1/cdc/virtual-computed-column-cdc.md
+++ /dev/null
@@ -1 +0,0 @@
-As of v22.1, changefeeds filter out [`VIRTUAL` computed columns](computed-columns.html) from events by default. This is a [backward-incompatible change](../releases/v22.1.html#v22-1-0-backward-incompatible-changes). To maintain the changefeed behavior in previous versions where [`NULL`](null-handling.html) values are emitted for virtual computed columns, see the [`virtual_columns`](create-changefeed.html#virtual-columns) option for more detail.
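-
-A sketch of opting back into the previous behavior for a single changefeed (the sink address is a placeholder, and the option value is given as a string):
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE movr.users INTO 'kafka://{KAFKA_BROKER}:9092' WITH virtual_columns = 'null';
-~~~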
diff --git a/src/current/_includes/v22.1/cdc/webhook-beta.md b/src/current/_includes/v22.1/cdc/webhook-beta.md
deleted file mode 100644
index c1e0447742e..00000000000
--- a/src/current/_includes/v22.1/cdc/webhook-beta.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-The webhook sink is currently in **beta** — see [usage considerations](../{{ page.version.version }}/changefeed-sinks.html#webhook-sink), available [parameters](../{{ page.version.version }}/create-changefeed.html#parameters), and [options](../{{ page.version.version }}/create-changefeed.html#options) for more information.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/client-transaction-retry.md b/src/current/_includes/v22.1/client-transaction-retry.md
deleted file mode 100644
index 2cae1347a18..00000000000
--- a/src/current/_includes/v22.1/client-transaction-retry.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` [isolation level](transactions.html#isolation-levels), CockroachDB may require the client to [retry a transaction](transactions.html#transaction-retries) in case of read/write [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). CockroachDB provides a [generic retry function](transactions.html#client-side-intervention) that runs inside a transaction and retries it as needed. The code sample below shows how it is used.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/computed-columns/add-computed-column.md b/src/current/_includes/v22.1/computed-columns/add-computed-column.md
deleted file mode 100644
index 5eff580e575..00000000000
--- a/src/current/_includes/v22.1/computed-columns/add-computed-column.md
+++ /dev/null
@@ -1,55 +0,0 @@
-In this example, create a table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE x (
- a INT NULL,
- b INT NULL AS (a * 2) STORED,
- c INT NULL AS (a + 4) STORED,
- FAMILY "primary" (a, b, rowid, c)
- );
-~~~
-
-Then, insert a row of data:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO x VALUES (6);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM x;
-~~~
-
-~~~
-+---+----+----+
-| a | b | c |
-+---+----+----+
-| 6 | 12 | 10 |
-+---+----+----+
-(1 row)
-~~~
-
-Now add a virtual computed column to the table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE x ADD COLUMN d INT AS (a // 2) VIRTUAL;
-~~~
-
-The `d` column is added to the table and computed from the `a` column divided by 2.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM x;
-~~~
-
-~~~
-+---+----+----+---+
-| a | b | c | d |
-+---+----+----+---+
-| 6 | 12 | 10 | 3 |
-+---+----+----+---+
-(1 row)
-~~~
diff --git a/src/current/_includes/v22.1/computed-columns/alter-computed-column.md b/src/current/_includes/v22.1/computed-columns/alter-computed-column.md
deleted file mode 100644
index 0c554f1c630..00000000000
--- a/src/current/_includes/v22.1/computed-columns/alter-computed-column.md
+++ /dev/null
@@ -1,76 +0,0 @@
-To alter the formula for a computed column, you must [`DROP`](drop-column.html) and [`ADD`](add-column.html) the column back with the new definition. Take the following table for instance:
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-> CREATE TABLE x (
-a INT NULL,
-b INT NULL AS (a * 2) STORED,
-c INT NULL AS (a + 4) STORED,
-FAMILY "primary" (a, b, rowid, c)
-);
-~~~
-~~~
-CREATE TABLE
-
-
-Time: 4ms total (execution 4ms / network 0ms)
-~~~
-
-Add a computed column `d`:
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED;
-~~~
-~~~
-ALTER TABLE
-
-
-Time: 199ms total (execution 199ms / network 0ms)
-~~~
-
-If you try to alter it, you'll get an error:
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-> ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED;
-~~~
-~~~
-invalid syntax: statement ignored: at or near "int": syntax error
-SQLSTATE: 42601
-DETAIL: source SQL:
-ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED
- ^
-HINT: try \h ALTER TABLE
-~~~
-
-However, you can drop it and then add it with the new definition:
-
-{% include_cached copy-clipboard.html %}
-~~~sql
-> SET sql_safe_updates = false;
-> ALTER TABLE x DROP COLUMN d;
-> ALTER TABLE x ADD COLUMN d INT AS (a // 3) STORED;
-> SET sql_safe_updates = true;
-~~~
-~~~
-SET
-
-
-Time: 1ms total (execution 0ms / network 0ms)
-
-ALTER TABLE
-
-
-Time: 195ms total (execution 195ms / network 0ms)
-
-ALTER TABLE
-
-
-Time: 186ms total (execution 185ms / network 0ms)
-
-SET
-
-
-Time: 0ms total (execution 0ms / network 0ms)
-~~~
diff --git a/src/current/_includes/v22.1/computed-columns/convert-computed-column.md b/src/current/_includes/v22.1/computed-columns/convert-computed-column.md
deleted file mode 100644
index 2c9897b8319..00000000000
--- a/src/current/_includes/v22.1/computed-columns/convert-computed-column.md
+++ /dev/null
@@ -1,108 +0,0 @@
-You can convert a stored, computed column into a regular column by using `ALTER TABLE`.
-
-In this example, create a simple table with a computed column:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE office_dogs (
- id INT PRIMARY KEY,
- first_name STRING,
- last_name STRING,
- full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO office_dogs (id, first_name, last_name) VALUES
- (1, 'Petee', 'Hirata'),
- (2, 'Carl', 'Kimball'),
- (3, 'Ernie', 'Narayan');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM office_dogs;
-~~~
-
-~~~
-+----+------------+-----------+---------------+
-| id | first_name | last_name | full_name |
-+----+------------+-----------+---------------+
-| 1 | Petee | Hirata | Petee Hirata |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-+----+------------+-----------+---------------+
-(3 rows)
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM office_dogs;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| first_name | STRING | true | NULL | | {} |
-| last_name | STRING | true | NULL | | {} |
-| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} |
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-(4 rows)
-~~~
-
-Now, convert the computed column (`full_name`) to a regular column:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED;
-~~~
-
-Check that the computed column was converted:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM office_dogs;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| first_name | STRING | true | NULL | | {} |
-| last_name | STRING | true | NULL | | {} |
-| full_name | STRING | true | NULL | | {} |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(4 rows)
-~~~
-
-The computed column is now a regular column and can be updated as such:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM office_dogs;
-~~~
-
-~~~
-+----+------------+-----------+----------------------+
-| id | first_name | last_name | full_name |
-+----+------------+-----------+----------------------+
-| 1 | Petee | Hirata | Petee Hirata |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-| 4 | Lola | McDog | This is not computed |
-+----+------------+-----------+----------------------+
-(4 rows)
-~~~
diff --git a/src/current/_includes/v22.1/computed-columns/jsonb.md b/src/current/_includes/v22.1/computed-columns/jsonb.md
deleted file mode 100644
index 6b0ca92f80c..00000000000
--- a/src/current/_includes/v22.1/computed-columns/jsonb.md
+++ /dev/null
@@ -1,70 +0,0 @@
-In this example, create a table with a `JSONB` column and a stored computed column:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE student_profiles (
- id STRING PRIMARY KEY AS (profile->>'id') STORED,
- profile JSONB
-);
-~~~
-
-Create a computed column after you create the table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE student_profiles ADD COLUMN age INT AS ( (profile->>'age')::INT) STORED;
-~~~
-
-Then, insert a few rows of data:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO student_profiles (profile) VALUES
- ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'),
- ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'),
- ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM student_profiles;
-~~~
-~~~
-+--------+---------------------------------------------------------------------------------------------------------------------+------+
-| id | profile | age |
----------+---------------------------------------------------------------------------------------------------------------------+------+
-| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} | 16 |
-| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} | 15 |
-| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | NULL |
-+--------+---------------------------------------------------------------------------------------------------------------------+------|
-~~~
-
-The primary key `id` is computed as a field from the `profile` column. The `age` column is also computed from the `profile` column's data.
-
-This example shows how to add a stored computed column with a [coerced type](scalar-expressions.html#explicit-type-coercions):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE json_data (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- json_info JSONB
-);
-INSERT INTO json_data (json_info) VALUES ('{"amount": "123.45"}');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER TABLE json_data ADD COLUMN amount DECIMAL AS ((json_info->>'amount')::DECIMAL) STORED;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM json_data;
-~~~
-
-~~~
- id | json_info | amount
----------------------------------------+----------------------+---------
- e7c3d706-1367-4d77-bfb4-386dfdeb10f9 | {"amount": "123.45"} | 123.45
-(1 row)
-~~~
diff --git a/src/current/_includes/v22.1/computed-columns/secondary-index.md b/src/current/_includes/v22.1/computed-columns/secondary-index.md
deleted file mode 100644
index 8b78325e695..00000000000
--- a/src/current/_includes/v22.1/computed-columns/secondary-index.md
+++ /dev/null
@@ -1,63 +0,0 @@
-In this example, create a table with a virtual computed column and an index on that column:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE gymnastics (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- athlete STRING,
- vault DECIMAL,
- bars DECIMAL,
- beam DECIMAL,
- floor DECIMAL,
- combined_score DECIMAL AS (vault + bars + beam + floor) VIRTUAL,
- INDEX total (combined_score DESC)
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES
- ('Simone Biles', 15.933, 14.800, 15.300, 15.800),
- ('Gabby Douglas', 0, 15.766, 0, 0),
- ('Laurie Hernandez', 15.100, 0, 15.233, 14.833),
- ('Madison Kocian', 0, 15.933, 0, 0),
- ('Aly Raisman', 15.833, 0, 15.000, 15.366);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM gymnastics;
-~~~
-~~~
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| id | athlete | vault | bars | beam | floor | combined_score |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 |
-| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 |
-| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 |
-| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 |
-| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-~~~
-
-Now, run a query using the secondary index:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~
-~~~
-+------------------+----------------+
-| athlete | combined_score |
-+------------------+----------------+
-| Simone Biles | 61.833 |
-| Aly Raisman | 46.199 |
-| Laurie Hernandez | 45.166 |
-| Madison Kocian | 15.933 |
-| Gabby Douglas | 15.766 |
-+------------------+----------------+
-~~~
-
-The athlete with the highest combined score of 61.833 is Simone Biles.
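-
-To check that this query is served by the `total` index rather than a full scan, inspect the plan with [`EXPLAIN`](explain.html) (the exact output varies by version):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~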
diff --git a/src/current/_includes/v22.1/computed-columns/simple.md b/src/current/_includes/v22.1/computed-columns/simple.md
deleted file mode 100644
index 24a86a59481..00000000000
--- a/src/current/_includes/v22.1/computed-columns/simple.md
+++ /dev/null
@@ -1,40 +0,0 @@
-In this example, let's create a simple table with a computed column:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- city STRING,
- first_name STRING,
- last_name STRING,
- full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED,
- address STRING,
- credit_card STRING,
- dl STRING UNIQUE CHECK (LENGTH(dl) < 8)
-);
-~~~
-
-Then, insert a few rows of data:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users (first_name, last_name) VALUES
- ('Lola', 'McDog'),
- ('Carl', 'Kimball'),
- ('Ernie', 'Narayan');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users;
-~~~
-~~~
- id | city | first_name | last_name | full_name | address | credit_card | dl
-+--------------------------------------+------+------------+-----------+---------------+---------+-------------+------+
- 5740da29-cc0c-47af-921c-b275d21d4c76 | NULL | Ernie | Narayan | Ernie Narayan | NULL | NULL | NULL
- e7e0b748-9194-4d71-9343-cd65218848f0 | NULL | Lola | McDog | Lola McDog | NULL | NULL | NULL
- f00e4715-8ca7-4d5a-8de5-ef1d5d8092f3 | NULL | Carl | Kimball | Carl Kimball | NULL | NULL | NULL
-(3 rows)
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html).
diff --git a/src/current/_includes/v22.1/computed-columns/virtual.md b/src/current/_includes/v22.1/computed-columns/virtual.md
deleted file mode 100644
index 7d873440328..00000000000
--- a/src/current/_includes/v22.1/computed-columns/virtual.md
+++ /dev/null
@@ -1,41 +0,0 @@
-In this example, create a table with a `JSONB` column and virtual computed columns:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE student_profiles (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- profile JSONB,
- full_name STRING AS (concat_ws(' ',profile->>'firstName', profile->>'lastName')) VIRTUAL,
- birthday TIMESTAMP AS (parse_timestamp(profile->>'birthdate')) VIRTUAL
-);
-~~~
-
-Then, insert a few rows of data:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO student_profiles (profile) VALUES
- ('{"id": "d78236", "firstName": "Arthur", "lastName": "Read", "birthdate": "2010-01-25", "school": "PVPHS", "credits": 120, "sports": "none"}'),
- ('{"firstName": "Buster", "lastName": "Bunny", "birthdate": "2011-11-07", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'),
- ('{"firstName": "Ernie", "lastName": "Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM student_profiles;
-~~~
-~~~
- id | profile | full_name | birthday
----------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+---------------+----------------------
- 0e420282-105d-473b-83e2-3b082e7033e4 | {"birthdate": "2011-11-07", "clubs": "MUN", "credits": 67, "firstName": "Buster", "id": "f98112", "lastName": "Bunny", "school": "THS"} | Buster Bunny | 2011-11-07 00:00:00
- 6e9b77cd-ec67-41ae-b346-7b3d89902c72 | {"birthdate": "2010-01-25", "credits": 120, "firstName": "Arthur", "id": "d78236", "lastName": "Read", "school": "PVPHS", "sports": "none"} | Arthur Read | 2010-01-25 00:00:00
- f74b21e3-dc1e-49b7-a648-3c9b9024a70f | {"clubs": "Chess", "firstName": "Ernie", "id": "t63512", "lastName": "Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | Ernie Narayan | NULL
-(3 rows)
-
-
-Time: 2ms total (execution 2ms / network 0ms)
-~~~
-
-The virtual column `full_name` is computed as a field from the `profile` column's data. The first name and last name are concatenated and separated by a single whitespace character using the [`concat_ws` string function](functions-and-operators.html#string-and-byte-functions).
-
-The virtual column `birthday` is parsed as a `TIMESTAMP` value from the `profile` column's `birthdate` string value. The [`parse_timestamp` function](functions-and-operators.html) is used to parse strings in `TIMESTAMP` format.
diff --git a/src/current/_includes/v22.1/connect/connection-url.md b/src/current/_includes/v22.1/connect/connection-url.md
deleted file mode 100644
index ae994bb3047..00000000000
--- a/src/current/_includes/v22.1/connect/connection-url.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-Set a `DATABASE_URL` environment variable to your connection string.
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-export DATABASE_URL="{connection string}"
-~~~
-
-
-
-
-Set a `DATABASE_URL` environment variable to your connection string.
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$env:DATABASE_URL = "{connection string}"
-~~~
-
-
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/connect/core-note.md b/src/current/_includes/v22.1/connect/core-note.md
deleted file mode 100644
index 7b701cafb80..00000000000
--- a/src/current/_includes/v22.1/connect/core-note.md
+++ /dev/null
@@ -1,7 +0,0 @@
-{{site.data.alerts.callout_info}}
-The connection information shown on this page uses [client certificate and key authentication]({% link {{ page.version.version }}/authentication.md %}#client-authentication) to connect to a secure, CockroachDB {{ site.data.products.core }} cluster.
-
-To connect to a CockroachDB {{ site.data.products.core }} cluster with client certificate and key authentication, you must first [generate server and client certificates]({% link {{ page.version.version }}/authentication.md %}#using-digital-certificates-with-cockroachdb).
-
-For instructions on starting a secure cluster, see [Start a Local Cluster (Secure)]({% link {{ page.version.version }}/secure-a-cluster.md %}).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/connect/jdbc-connection-url.md b/src/current/_includes/v22.1/connect/jdbc-connection-url.md
deleted file mode 100644
index c055a390b4e..00000000000
--- a/src/current/_includes/v22.1/connect/jdbc-connection-url.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Set a `JDBC_DATABASE_URL` environment variable to your JDBC connection string.
-
-
diff --git a/src/current/_includes/v22.1/core-note.md b/src/current/_includes/v22.1/core-note.md
deleted file mode 100644
index 7b701cafb80..00000000000
--- a/src/current/_includes/v22.1/core-note.md
+++ /dev/null
@@ -1,7 +0,0 @@
-{{site.data.alerts.callout_info}}
-The connection information shown on this page uses [client certificate and key authentication]({% link {{ page.version.version }}/authentication.md %}#client-authentication) to connect to a secure, CockroachDB {{ site.data.products.core }} cluster.
-
-To connect to a CockroachDB {{ site.data.products.core }} cluster with client certificate and key authentication, you must first [generate server and client certificates]({% link {{ page.version.version }}/authentication.md %}#using-digital-certificates-with-cockroachdb).
-
-For instructions on starting a secure cluster, see [Start a Local Cluster (Secure)]({% link {{ page.version.version }}/secure-a-cluster.md %}).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/dedicated-pci-compliance.md b/src/current/_includes/v22.1/dedicated-pci-compliance.md
deleted file mode 100644
index 97fa54068c7..00000000000
--- a/src/current/_includes/v22.1/dedicated-pci-compliance.md
+++ /dev/null
@@ -1,7 +0,0 @@
-{{site.data.alerts.callout_info}}
-CockroachDB {{ site.data.products.dedicated }} clusters comply with the Payment Card Industry Data Security Standard (PCI DSS). Compliance is certified by a PCI Qualified Security Assessor (QSA).
-
-To achieve compliance with PCI DSS on a CockroachDB {{ site.data.products.dedicated }} cluster, you must ensure that any information related to payments or other personally-identifiable information (PII) is encrypted, tokenized, or masked before being written to CockroachDB. You can implement this data protection from within the customer application or through a third-party intermediary solution such as [Satori](https://satoricyber.com/).
-
-To learn more about achieving PCI DSS compliance with CockroachDB {{ site.data.products.dedicated }}, contact your Cockroach Labs account team.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/demo_movr.md b/src/current/_includes/v22.1/demo_movr.md
deleted file mode 100644
index cde6c211213..00000000000
--- a/src/current/_includes/v22.1/demo_movr.md
+++ /dev/null
@@ -1,10 +0,0 @@
-Start the [MovR database](movr.html) on a 3-node CockroachDB demo cluster with a larger data set.
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-cockroach demo movr --num-histories 250000 --num-promo-codes 250000 --num-rides 125000 --num-users 12500 --num-vehicles 3750 --nodes 3
-~~~
-
-{% comment %}
-This is a test
-{% endcomment %}
diff --git a/src/current/_includes/v22.1/faq/auto-generate-unique-ids.html b/src/current/_includes/v22.1/faq/auto-generate-unique-ids.html
deleted file mode 100644
index ee56e21b7e0..00000000000
--- a/src/current/_includes/v22.1/faq/auto-generate-unique-ids.html
+++ /dev/null
@@ -1,109 +0,0 @@
-To auto-generate unique row identifiers, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (
- id UUID NOT NULL DEFAULT gen_random_uuid(),
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- FAMILY "primary" (id, city, name, address, credit_card)
-);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users;
-~~~
-
-~~~
- id | city | name | address | credit_card
-+--------------------------------------+----------+-------+---------+-------------+
- cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL | NULL
- 2382564e-702f-42d9-a139-b6df535ae00a | seattle | Eric | NULL | NULL
- 7d27e40b-263a-4891-b29b-d59135e55650 | seattle | Dan | NULL | NULL
-(3 rows)
-~~~
-
-Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users2 (
- id BYTES DEFAULT uuid_v4(),
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- FAMILY "primary" (id, city, name, address, credit_card)
-);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users2 (name, city) VALUES ('Anna', 'new york'), ('Jonah', 'seattle'), ('Terry', 'chicago');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users2;
-~~~
-
-~~~
- id | city | name | address | credit_card
-+------------------------------------------------+----------+-------+---------+-------------+
- 4\244\277\323/\261M\007\213\275*\0060\346\025z | chicago | Terry | NULL | NULL
- \273*t=u.F\010\274f/}\313\332\373a | new york | Anna | NULL | NULL
- \004\\\364nP\024L)\252\364\222r$\274O0 | seattle | Jonah | NULL | NULL
-(3 rows)
-~~~
-
-In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 512 MiB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load.
-
-This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index.
-
-If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users3 (
- id INT DEFAULT unique_rowid(),
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- FAMILY "primary" (id, city, name, address, credit_card)
-);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users3 (name, city) VALUES ('Blake', 'chicago'), ('Hannah', 'seattle'), ('Bobby', 'seattle');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users3;
-~~~
-
-~~~
- id | city | name | address | credit_card
-+--------------------+---------+--------+---------+-------------+
- 469048192112197633 | chicago | Blake | NULL | NULL
- 469048192112263169 | seattle | Hannah | NULL | NULL
- 469048192112295937 | seattle | Bobby | NULL | NULL
-(3 rows)
-~~~
-
-Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Note that the generated values can contain gaps, and their ordering is not strictly guaranteed.
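-
-For example, a minimal sketch of an equivalent table definition using the `SERIAL` pseudo-type mentioned above (with the default `serial_normalization` session setting, `SERIAL` expands to an integer column with `DEFAULT unique_rowid()`); the `users4` table name is illustrative:
-
-~~~ sql
-> CREATE TABLE users4 (
-    id SERIAL,
-    city STRING NOT NULL,
-    name STRING NULL,
-    CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
-);
-~~~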
-
-For further background on UUIDs, see [What is a UUID, and Why Should You Care?](https://www.cockroachlabs.com/blog/what-is-a-uuid/).
diff --git a/src/current/_includes/v22.1/faq/clock-synchronization-effects.md b/src/current/_includes/v22.1/faq/clock-synchronization-effects.md
deleted file mode 100644
index 8e749ba39c7..00000000000
--- a/src/current/_includes/v22.1/faq/clock-synchronization-effects.md
+++ /dev/null
@@ -1,27 +0,0 @@
-CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed, it spontaneously shuts down. This offset defaults to 500ms but can be changed via the [`--max-offset`](cockroach-start.html#flags-max-offset) flag when starting each node.
-
-While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node.
-
-In very rare cases, CockroachDB can momentarily run with a stale clock. This can happen when using vMotion, which can suspend a VM running CockroachDB, migrate it to different hardware, and resume it. This will cause CockroachDB to be out of sync for a short period before it jumps to the correct time. During this window, it would be possible for a client to read stale data and write data derived from stale reads. By enabling the `server.clock.forward_jump_check_enabled` [cluster setting](cluster-settings.html), you can be alerted when the CockroachDB clock jumps forward, indicating it had been running with a stale clock. To protect against this on vMotion, however, use the [`--clock-device`](cockroach-start.html#general) flag to specify a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) for CockroachDB to use when querying the current time. When doing so, you should not enable `server.clock.forward_jump_check_enabled` because forward jumps will be expected and harmless. For more information on how `--clock-device` interacts with vMotion, see [this blog post](https://core.vmware.com/blog/cockroachdb-vmotion-support-vsphere-7-using-precise-timekeeping).
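-
-For example, a minimal sketch of enabling the forward jump check (only when not using `--clock-device`):
-
-~~~ sql
-> SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;
-~~~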
-
-### Considerations
-
-When setting up clock synchronization:
-
-- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing).
-- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should.
-- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
-- Do not run more than one clock sync service on VMs where `cockroach` is running.
-- {% include {{ page.version.version }}/misc/multiregion-max-offset.md %}
-
-### Tutorials
-
-For guidance on synchronizing clocks, see the tutorial for your deployment environment:
-
-Environment | Featured Approach
-------------|---------------------
-[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
-[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
-[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
-[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
-[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.
diff --git a/src/current/_includes/v22.1/faq/clock-synchronization-monitoring.html b/src/current/_includes/v22.1/faq/clock-synchronization-monitoring.html
deleted file mode 100644
index 7fb82e4d188..00000000000
--- a/src/current/_includes/v22.1/faq/clock-synchronization-monitoring.html
+++ /dev/null
@@ -1,8 +0,0 @@
-As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus time-series database. Two of these metrics export how close each node's clock is to the clock of all other nodes:
-
-Metric | Definition
--------|-----------
-`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds
-`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds
-
-As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset.
diff --git a/src/current/_includes/v22.1/faq/differences-between-numberings.md b/src/current/_includes/v22.1/faq/differences-between-numberings.md
deleted file mode 100644
index 80f7fe26d50..00000000000
--- a/src/current/_includes/v22.1/faq/differences-between-numberings.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences |
-|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------|
-| Size | 16 bytes | 8 bytes | 1 to 8 bytes |
-| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered |
-| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) |
-| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values |
-| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local |
-| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher |
-| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node |
-| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited |
diff --git a/src/current/_includes/v22.1/faq/sequential-numbers.md b/src/current/_includes/v22.1/faq/sequential-numbers.md
deleted file mode 100644
index 0290c042060..00000000000
--- a/src/current/_includes/v22.1/faq/sequential-numbers.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations:
-
-- Unless you need roughly-ordered numbers, use [`UUID`](uuid.html) values instead. See the [previous
-FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details.
-- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that
-consumes a lower sequence number commits after a transaction that consumes a higher number).
-- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) on a few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers.
-- {% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
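-
-For illustration only, a minimal sketch of a sequence-backed ID column (the `orders` table and `order_seq` sequence are hypothetical); as noted above, prefer `UUID` values or hash-sharded indexes for indexed columns:
-
-~~~ sql
-> CREATE SEQUENCE order_seq;
-> CREATE TABLE orders (
-    id INT PRIMARY KEY DEFAULT nextval('order_seq'),
-    item STRING
-);
-~~~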
diff --git a/src/current/_includes/v22.1/faq/sequential-transactions.md b/src/current/_includes/v22.1/faq/sequential-transactions.md
deleted file mode 100644
index 684f2ce5d2a..00000000000
--- a/src/current/_includes/v22.1/faq/sequential-transactions.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly
-solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM
-TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following:
-
-- Paginating through all the changes to a table or dataset
-- Determining the order of changes to data over time
-- Determining the state of data at some point in the past
-- Determining the changes to data between two points of time
-
-Consider also that the values generated by `unique_rowid()`, described in the previous FAQ entries, also provide an approximate time ordering.
-
-However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows:
-
-- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);`
-- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;`
-
-This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result.
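-
-A minimal sketch of this pattern as a single transaction:
-
-~~~ sql
-BEGIN;
-INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;
-COMMIT;
-~~~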
-
-If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs.
diff --git a/src/current/_includes/v22.1/faq/simulate-key-value-store.html b/src/current/_includes/v22.1/faq/simulate-key-value-store.html
deleted file mode 100644
index 4772fa5358c..00000000000
--- a/src/current/_includes/v22.1/faq/simulate-key-value-store.html
+++ /dev/null
@@ -1,13 +0,0 @@
-CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key:
-
-~~~ sql
-> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES);
-~~~
-
-When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation:
-
-~~~ sql
-> UPSERT INTO kv VALUES (1, b'hello');
-~~~
-
-This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises.
diff --git a/src/current/_includes/v22.1/faq/what-is-crdb.md b/src/current/_includes/v22.1/faq/what-is-crdb.md
deleted file mode 100644
index 28857ed61fa..00000000000
--- a/src/current/_includes/v22.1/faq/what-is-crdb.md
+++ /dev/null
@@ -1,7 +0,0 @@
-CockroachDB is a [distributed SQL](https://www.cockroachlabs.com/blog/what-is-distributed-sql/) database built on a transactional and strongly-consistent key-value store. It **scales** horizontally; **survives** disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports **strongly-consistent** ACID transactions; and provides a familiar **SQL** API for structuring, manipulating, and querying data.
-
-CockroachDB is inspired by Google's [Spanner](http://research.google.com/archive/spanner.html) and [F1](http://research.google.com/pubs/pub38125.html) technologies, and the [source code](https://github.com/cockroachdb/cockroach) is freely available.
-
-{{site.data.alerts.callout_success}}
-For a deeper dive into CockroachDB's capabilities and how it fits into the database landscape, take the free [**Intro to Distributed SQL and CockroachDB**](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-distributed-sql-and-cockroachdb+self-paced/about) course on Cockroach University.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/filter-tabs/crdb-kubernetes.md b/src/current/_includes/v22.1/filter-tabs/crdb-kubernetes.md
deleted file mode 100644
index db7f18ff324..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crdb-kubernetes.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "orchestrate-a-local-cluster-with-kubernetes.html;orchestrate-a-local-cluster-with-kubernetes-insecure.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/crdb-single-kubernetes.md b/src/current/_includes/v22.1/filter-tabs/crdb-single-kubernetes.md
deleted file mode 100644
index 409bdc1855c..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crdb-single-kubernetes.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "deploy-cockroachdb-with-kubernetes.html;deploy-cockroachdb-with-kubernetes-insecure.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/crud-go.md b/src/current/_includes/v22.1/filter-tabs/crud-go.md
deleted file mode 100644
index a69d0e4435c..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crud-go.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Use pgx;Use GORM;Use lib/pq;Use upper/db" %}
-{% assign html_page_filenames = "build-a-go-app-with-cockroachdb.html;build-a-go-app-with-cockroachdb-gorm.html;build-a-go-app-with-cockroachdb-pq.html;build-a-go-app-with-cockroachdb-upperdb.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/crud-java.md b/src/current/_includes/v22.1/filter-tabs/crud-java.md
deleted file mode 100644
index 5cbdf749e09..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crud-java.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Use JDBC;Use Hibernate;Use jOOQ;Use MyBatis-Spring" %}
-{% assign html_page_filenames = "build-a-java-app-with-cockroachdb.html;build-a-java-app-with-cockroachdb-hibernate.html;build-a-java-app-with-cockroachdb-jooq.html;build-a-spring-app-with-cockroachdb-mybatis.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/crud-js.md b/src/current/_includes/v22.1/filter-tabs/crud-js.md
deleted file mode 100644
index bb319ed88c1..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crud-js.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Use node-postgres;Use Sequelize;Use Knex.js;Use Prisma;Use TypeORM" %}
-{% assign html_page_filenames = "build-a-nodejs-app-with-cockroachdb.html;build-a-nodejs-app-with-cockroachdb-sequelize.html;build-a-nodejs-app-with-cockroachdb-knexjs.html;build-a-nodejs-app-with-cockroachdb-prisma.html;build-a-typescript-app-with-cockroachdb.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/crud-python.md b/src/current/_includes/v22.1/filter-tabs/crud-python.md
deleted file mode 100644
index cb4905591f0..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crud-python.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Use psycopg3;Use psycopg2;Use SQLAlchemy;Use Django;Use peewee" %}
-{% assign html_page_filenames = "build-a-python-app-with-cockroachdb-psycopg3.html;build-a-python-app-with-cockroachdb.html;build-a-python-app-with-cockroachdb-sqlalchemy.html;build-a-python-app-with-cockroachdb-django.html;https://docs.peewee-orm.com/en/latest/peewee/playhouse.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/crud-ruby.md b/src/current/_includes/v22.1/filter-tabs/crud-ruby.md
deleted file mode 100644
index 5fc13aa697b..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crud-ruby.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Use pg;Use ActiveRecord" %}
-{% assign html_page_filenames = "build-a-ruby-app-with-cockroachdb.html;build-a-ruby-app-with-cockroachdb-activerecord.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/crud-spring.md b/src/current/_includes/v22.1/filter-tabs/crud-spring.md
deleted file mode 100644
index bd4f66f19a7..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/crud-spring.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Use JDBC;Use JPA" %}
-{% assign html_page_filenames = "build-a-spring-app-with-cockroachdb-jdbc.html;build-a-spring-app-with-cockroachdb-jpa.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-aws.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-aws.md
deleted file mode 100644
index 706e5d85b8f..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-aws.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "deploy-cockroachdb-on-aws.html;deploy-cockroachdb-on-aws-insecure.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-do.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-do.md
deleted file mode 100644
index 02e44afee30..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-do.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "deploy-cockroachdb-on-digital-ocean.html;deploy-cockroachdb-on-digital-ocean-insecure.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-gce.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-gce.md
deleted file mode 100644
index 5799dfec9f0..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-gce.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "deploy-cockroachdb-on-google-cloud-platform.html;deploy-cockroachdb-on-google-cloud-platform-insecure.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-ma.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-ma.md
deleted file mode 100644
index 3f1162b426c..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-ma.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "deploy-cockroachdb-on-microsoft-azure.html;deploy-cockroachdb-on-microsoft-azure-insecure.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-op.md b/src/current/_includes/v22.1/filter-tabs/deploy-crdb-op.md
deleted file mode 100644
index fdf35c61162..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/deploy-crdb-op.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "deploy-cockroachdb-on-premises.html;deploy-cockroachdb-on-premises-insecure.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/perf-bench-tpc-c.md b/src/current/_includes/v22.1/filter-tabs/perf-bench-tpc-c.md
deleted file mode 100644
index 1394f916add..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/perf-bench-tpc-c.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Local;Local (Multi-Region);Small;Medium;Large" %}
-{% assign html_page_filenames = "performance-benchmarking-with-tpcc-local.html;performance-benchmarking-with-tpcc-local-multiregion.html;performance-benchmarking-with-tpcc-small.html;performance-benchmarking-with-tpcc-medium.html;performance-benchmarking-with-tpcc-large.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/security-cert.md b/src/current/_includes/v22.1/filter-tabs/security-cert.md
deleted file mode 100644
index 0832e618021..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/security-cert.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Use cockroach cert;Use OpenSSL;Use custom CA" %}
-{% assign html_page_filenames = "cockroach-cert.html;create-security-certificates-openssl.html;create-security-certificates-custom-ca.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/filter-tabs/start-a-cluster.md b/src/current/_includes/v22.1/filter-tabs/start-a-cluster.md
deleted file mode 100644
index 92a688078cb..00000000000
--- a/src/current/_includes/v22.1/filter-tabs/start-a-cluster.md
+++ /dev/null
@@ -1,4 +0,0 @@
-{% assign tab_names_html = "Secure;Insecure" %}
-{% assign html_page_filenames = "secure-a-cluster.html;start-a-local-cluster.html" %}
-
-{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %}
diff --git a/src/current/_includes/v22.1/import-table-deprecate.md b/src/current/_includes/v22.1/import-table-deprecate.md
deleted file mode 100644
index a7a21c87f7e..00000000000
--- a/src/current/_includes/v22.1/import-table-deprecate.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-As of v22.1, certain `IMPORT TABLE` statements that defined the table schema inline are **not** supported. See [Import — Considerations](import.html#considerations) for more details. To import data into a new table, use [`CREATE TABLE`](create-table.html) followed by [`IMPORT INTO`](import-into.html). For an example, read [Import into a new table from a CSV file](import-into.html#import-into-a-new-table-from-a-csv-file).
-{{site.data.alerts.end}}
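-
-For example, a minimal sketch of this pattern (the table definition and `userfile` path are illustrative):
-
-~~~ sql
-CREATE TABLE customers (id UUID PRIMARY KEY, name STRING);
-IMPORT INTO customers (id, name)
-    CSV DATA ('userfile:///customers.csv');
-~~~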
diff --git a/src/current/_includes/v22.1/jdbc-connection-url.md b/src/current/_includes/v22.1/jdbc-connection-url.md
deleted file mode 100644
index c055a390b4e..00000000000
--- a/src/current/_includes/v22.1/jdbc-connection-url.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Set a `JDBC_DATABASE_URL` environment variable to your JDBC connection string.
-
-
diff --git a/src/current/_includes/v22.1/json/json-sample.go b/src/current/_includes/v22.1/json/json-sample.go
deleted file mode 100644
index d5953a71ee2..00000000000
--- a/src/current/_includes/v22.1/json/json-sample.go
+++ /dev/null
@@ -1,79 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "io/ioutil"
- "net/http"
- "time"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257")
- if err != nil {
- panic(err)
- }
-
- // The Reddit API wants us to tell it where to start from. The first request
- // we just say "null" to say "from the start", subsequent requests will use
- // the value received from the last call.
- after := "null"
-
- for i := 0; i < 41; i++ {
- after, err = makeReq(db, after)
- if err != nil {
- panic(err)
- }
- // Reddit limits to 30 requests per minute, so do not do any more than that.
- time.Sleep(2 * time.Second)
- }
-}
-
-func makeReq(db *sql.DB, after string) (string, error) {
-	// First, make a request to reddit using the appropriate "after" string.
-	client := &http.Client{}
-	req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil)
-	if err != nil {
-		return "", err
-	}
-
-	req.Header.Add("User-Agent", `Go`)
-
-	resp, err := client.Do(req)
-	if err != nil {
-		return "", err
-	}
-	defer resp.Body.Close()
-
- res, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- return "", err
- }
-
- // We've gotten back our JSON from reddit, we can use a couple SQL tricks to
- // accomplish multiple things at once.
- // The JSON reddit returns looks like this:
- // {
- // "data": {
- // "children": [ ... ]
- // },
- // "after": ...
- // }
- // We structure our query so that we extract the `children` field, and then
- // expand that and insert each individual element into the database as a
- // separate row. We then return the "after" field so we know how to make the
- // next request.
- r, err := db.Query(`
- INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements($1->'data'->'children')
- RETURNING $1->'data'->'after'`,
- string(res))
- if err != nil {
- return "", err
- }
-
- // Since we did a RETURNING, we need to grab the result of our query.
- r.Next()
- var newAfter string
- r.Scan(&newAfter)
-
- return newAfter, nil
-}
diff --git a/src/current/_includes/v22.1/json/json-sample.py b/src/current/_includes/v22.1/json/json-sample.py
deleted file mode 100644
index 49e302613e0..00000000000
--- a/src/current/_includes/v22.1/json/json-sample.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import json
-import psycopg2
-import requests
-import time
-
-conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-# The Reddit API wants us to tell it where to start from. The first request
-# we just say "null" to say "from the start"; subsequent requests will use
-# the value received from the last call.
-url = "https://www.reddit.com/r/programming.json"
-after = {"after": "null"}
-
-for n in range(41):
- # First, make a request to reddit using the appropriate "after" string.
- req = requests.get(url, params=after, headers={"User-Agent": "Python"})
-
- # Decode the JSON and set "after" for the next request.
- resp = req.json()
- after = {"after": str(resp['data']['after'])}
-
- # Convert the JSON to a string to send to the database.
- data = json.dumps(resp)
-
- # The JSON reddit returns looks like this:
- # {
- # "data": {
- # "children": [ ... ]
- # },
- # "after": ...
- # }
- # We structure our query so that we extract the `children` field, and then
- # expand that and insert each individual element into the database as a
- # separate row.
- cur.execute("""INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements(%s->'data'->'children')""", (data,))
-
- # Reddit limits to 30 requests per minute, so do not do any more than that.
- time.sleep(2)
-
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v22.1/known-limitations/cdc.md b/src/current/_includes/v22.1/known-limitations/cdc.md
deleted file mode 100644
index 8083b4c61ff..00000000000
--- a/src/current/_includes/v22.1/known-limitations/cdc.md
+++ /dev/null
@@ -1,8 +0,0 @@
-- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73434)
-- Changefeed target options are limited to tables and [column families](changefeeds-on-tables-with-column-families.html). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73435)
-- Using a [cloud storage sink](changefeed-sinks.html#cloud-storage-sink) only works with `JSON` and emits [newline-delimited json](http://ndjson.org) files. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432)
-- Webhook sinks only support HTTPS. Use the [`insecure_tls_skip_verify`](create-changefeed.html#tls-skip-verify) parameter when testing to disable certificate verification; however, this still requires HTTPS and certificates. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73431)
-- [Webhook sinks](changefeed-sinks.html#webhook-sink) and [Google Cloud Pub/Sub sinks](changefeed-sinks.html#google-cloud-pub-sub) only have support for emitting `JSON`. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432)
-- There is no concurrency configurability for [webhook sinks](changefeed-sinks.html#webhook-sink). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73430)
-- Using the [`split_column_families`](create-changefeed.html#split-column-families) and [`resolved`](create-changefeed.html#resolved-option) options on the same changefeed will cause an error when using the following [sinks](changefeed-sinks.html): Kafka and Google Cloud Pub/Sub. Instead, use the individual `FAMILY` keyword to specify column families when creating a changefeed. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/79452)
-- There is no configuration for unordered messages for [Google Cloud Pub/Sub sinks](changefeed-sinks.html#google-cloud-pub-sub). You must specify the `region` parameter in the URI to maintain [ordering guarantees](changefeed-messages.html#ordering-guarantees). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/80884)
diff --git a/src/current/_includes/v22.1/known-limitations/copy-syntax.md b/src/current/_includes/v22.1/known-limitations/copy-syntax.md
deleted file mode 100644
index 36b57030e9b..00000000000
--- a/src/current/_includes/v22.1/known-limitations/copy-syntax.md
+++ /dev/null
@@ -1,13 +0,0 @@
-CockroachDB does not yet support the following `COPY` syntax:
-
-- `COPY ... TO`. To copy data from a CockroachDB cluster to a file, use an [`EXPORT`](export.html) statement.
-
- [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/41608)
-
-- Various unsupported `COPY` options (`FORMAT`, `FREEZE`, etc.)
-
- [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/41608)
-
-- `COPY ... FROM ... WHERE <expr>`
-
- [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54580)
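-
-For example, a minimal sketch of the `EXPORT` alternative mentioned above (the table and destination path are illustrative):
-
-~~~ sql
-EXPORT INTO CSV 'nodelocal://1/users-export' FROM TABLE users;
-~~~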
diff --git a/src/current/_includes/v22.1/known-limitations/drop-single-partition.md b/src/current/_includes/v22.1/known-limitations/drop-single-partition.md
deleted file mode 100644
index 3d8166fdc04..00000000000
--- a/src/current/_includes/v22.1/known-limitations/drop-single-partition.md
+++ /dev/null
@@ -1 +0,0 @@
-CockroachDB does not currently support dropping a single partition from a table. In order to remove partitions, you can [repartition]({% unless page.name == "partitioning.md" %}partitioning.html{% endunless %}#repartition-a-table) the table.
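-
-For example, a minimal sketch of repartitioning so that an unwanted partition is no longer defined (the table, column, and partition names are illustrative, and `city` is assumed to be a prefix of the primary key):
-
-~~~ sql
-ALTER TABLE users PARTITION BY LIST (city) (
-    PARTITION north_america VALUES IN ('new york', 'seattle'),
-    PARTITION other VALUES IN (DEFAULT)
-);
-~~~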
diff --git a/src/current/_includes/v22.1/known-limitations/drop-unique-index-from-create-table.md b/src/current/_includes/v22.1/known-limitations/drop-unique-index-from-create-table.md
deleted file mode 100644
index 698a24c24ef..00000000000
--- a/src/current/_includes/v22.1/known-limitations/drop-unique-index-from-create-table.md
+++ /dev/null
@@ -1 +0,0 @@
-[`UNIQUE` indexes](create-index.html) created as part of a [`CREATE TABLE`](create-table.html) statement cannot be removed without using [`CASCADE`]({% unless page.name == "drop-index.md" %}drop-index.html{% endunless %}#remove-an-index-and-dependent-objects-with-cascade). Unique indexes created with [`CREATE INDEX`](create-index.html) do not have this limitation.
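-
-For example, a minimal sketch of removing such an index (the table and index names are illustrative):
-
-~~~ sql
-DROP INDEX users@users_city_key CASCADE;
-~~~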
diff --git a/src/current/_includes/v22.1/known-limitations/dropping-renaming-during-upgrade.md b/src/current/_includes/v22.1/known-limitations/dropping-renaming-during-upgrade.md
deleted file mode 100644
index 38f7f9ddd87..00000000000
--- a/src/current/_includes/v22.1/known-limitations/dropping-renaming-during-upgrade.md
+++ /dev/null
@@ -1,10 +0,0 @@
-When upgrading from v20.1.x to v20.2.0, as soon as any node of the cluster has run v20.2.0, it is important to avoid dropping, renaming, or truncating tables, views, sequences, or databases on the v20.1 nodes. This is true even in cases where nodes were upgraded to v20.2.0 and then rolled back to v20.1.
-
-In this case, avoid running the following operations against v20.1 nodes:
-
-- [`DROP TABLE`](drop-table.html), [`TRUNCATE TABLE`](truncate.html), [`RENAME TABLE`](rename-table.html)
-- [`DROP VIEW`](drop-view.html)
-- [`DROP SEQUENCE`](drop-sequence.html), [`RENAME SEQUENCE`](rename-sequence.html)
-- [`DROP DATABASE`](drop-database.html), [`RENAME DATABASE`](rename-database.html)
-
-Running any of these operations against v20.1 nodes will result in inconsistency between two internal tables, `system.namespace` and `system.namespace2`. This inconsistency will prevent you from being able to recreate the dropped or renamed objects; the returned error will be `ERROR: relation already exists`. In the case of a dropped or renamed database, [`SHOW DATABASES`](show-databases.html) will also return an error: `ERROR: internal error: "" is not a database`.
diff --git a/src/current/_includes/v22.1/known-limitations/import-high-disk-contention.md b/src/current/_includes/v22.1/known-limitations/import-high-disk-contention.md
deleted file mode 100644
index 0e016ecaac5..00000000000
--- a/src/current/_includes/v22.1/known-limitations/import-high-disk-contention.md
+++ /dev/null
@@ -1,6 +0,0 @@
-[`IMPORT`](import.html) can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
-~~~
diff --git a/src/current/_includes/v22.1/known-limitations/old-multi-col-stats.md b/src/current/_includes/v22.1/known-limitations/old-multi-col-stats.md
deleted file mode 100644
index 595be9c7209..00000000000
--- a/src/current/_includes/v22.1/known-limitations/old-multi-col-stats.md
+++ /dev/null
@@ -1,3 +0,0 @@
-When a column is dropped from a multi-column index, the {% if page.name == "cost-based-optimizer.md" %} optimizer {% else %} [optimizer](cost-based-optimizer.html) {% endif %} will not collect new statistics for the deleted column. However, the optimizer never deletes the old [multi-column statistics](create-statistics.html#create-statistics-on-multiple-columns). This can cause a buildup of statistics in `system.table_statistics`, leading the optimizer to use stale statistics, which could result in sub-optimal plans. To work around this issue and avoid these scenarios, explicitly [delete those statistics](create-statistics.html#delete-statistics) from the `system.table_statistics` table.
-
- [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407)
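-
-A hypothetical sketch of the workaround (this deletes **all** stored statistics, after which statistics can be recollected; the `users` table and statistics names are illustrative):
-
-~~~ sql
-DELETE FROM system.table_statistics WHERE true;
-CREATE STATISTICS users_stats FROM users;
-~~~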
diff --git a/src/current/_includes/v22.1/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v22.1/known-limitations/partitioning-with-placeholders.md
deleted file mode 100644
index b3c3345200d..00000000000
--- a/src/current/_includes/v22.1/known-limitations/partitioning-with-placeholders.md
+++ /dev/null
@@ -1 +0,0 @@
-When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause.
diff --git a/src/current/_includes/v22.1/known-limitations/restore-multiregion-match.md b/src/current/_includes/v22.1/known-limitations/restore-multiregion-match.md
deleted file mode 100644
index 6d0f6c989fc..00000000000
--- a/src/current/_includes/v22.1/known-limitations/restore-multiregion-match.md
+++ /dev/null
@@ -1,48 +0,0 @@
-[`REGIONAL BY TABLE`](multiregion-overview.html#regional-tables) and [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables can be restored **only** if the regions of the backed-up table match those of the target database. All of the following must be true for `RESTORE` to be successful:
-
- * The source database and the destination database have the same set of [regions](multiregion-overview.html#database-regions).
- * The regions were added to each of the databases in the same order.
- * The databases have the same [primary region](set-primary-region.html).
-
- The following example would be considered as having **mismatched** regions because the database regions were not added in the same order and the primary regions do not match.
-
- Running on the source database:
-
- ~~~ sql
- ALTER DATABASE source_database SET PRIMARY REGION "us-east1";
- ~~~
- ~~~ sql
- ALTER DATABASE source_database ADD region "us-west1";
- ~~~
-
- Running on the destination database:
-
- ~~~ sql
- ALTER DATABASE destination_database SET PRIMARY REGION "us-west1";
- ~~~
- ~~~ sql
- ALTER DATABASE destination_database ADD region "us-east1";
- ~~~
-
- In addition, the following scenario has mismatched regions between the databases since the regions were not added to the database in the same order.
-
- Running on the source database:
-
- ~~~ sql
- ALTER DATABASE source_database SET PRIMARY REGION "us-east1";
- ~~~
- ~~~ sql
- ALTER DATABASE source_database ADD region "us-west1";
- ~~~
-
- Running on the destination database:
-
- ~~~ sql
- ALTER DATABASE destination_database SET PRIMARY REGION "us-west1";
- ~~~
- ~~~ sql
- ALTER DATABASE destination_database ADD region "us-east1";
- ~~~
- ~~~ sql
- ALTER DATABASE destination_database SET PRIMARY REGION "us-east1";
- ~~~
diff --git a/src/current/_includes/v22.1/known-limitations/restore-tables-non-multi-reg.md b/src/current/_includes/v22.1/known-limitations/restore-tables-non-multi-reg.md
deleted file mode 100644
index 45ce8db1924..00000000000
--- a/src/current/_includes/v22.1/known-limitations/restore-tables-non-multi-reg.md
+++ /dev/null
@@ -1 +0,0 @@
-Restoring [`GLOBAL`](multiregion-overview.html#global-tables) and [`REGIONAL BY TABLE`](multiregion-overview.html#regional-tables) tables into a **non**-multi-region database is not supported. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/71502)
diff --git a/src/current/_includes/v22.1/known-limitations/row-level-ttl-limitations.md b/src/current/_includes/v22.1/known-limitations/row-level-ttl-limitations.md
deleted file mode 100644
index fd4db41985f..00000000000
--- a/src/current/_includes/v22.1/known-limitations/row-level-ttl-limitations.md
+++ /dev/null
@@ -1,10 +0,0 @@
-- You cannot use [foreign keys](foreign-key.html) to create references to or from a table that uses Row-Level TTL. [cockroachdb/cockroach#76407](https://github.com/cockroachdb/cockroach/issues/76407)
-- Any queries you run against tables with Row-Level TTL enabled do not filter out expired rows from the result set (this includes [`UPDATE`s](update.html) and [`DELETE`s](delete.html)). This feature may be added in a future release. For now, follow the instructions in [Filter out expired rows from a selection query](row-level-ttl.html#filter-out-expired-rows-from-a-selection-query).
-- The TTL cannot be customized based on the values of other columns in the row. [cockroachdb/cockroach#76916](https://github.com/cockroachdb/cockroach/issues/76916)
- - Because of the above limitation, adding TTL to large existing tables [can negatively affect performance](row-level-ttl.html#ttl-existing-table-performance-note), since a new column must be created and backfilled for every row. Creating a new table with a TTL is not affected by this limitation.
-- The queries executed by Row-Level TTL are not yet optimized for performance:
- - They do not use any indexes that may be available on the [`crdb_internal_expiration` column](row-level-ttl.html#crdb-internal-expiration).
- - They do not take into account [node localities](cockroach-start.html#locality).
- - All deletes are run on a single node, instead of being distributed.
- - For details, see [cockroachdb/cockroach#76914](https://github.com/cockroachdb/cockroach/issues/76914)
-- If you [override the TTL for a row by setting `crdb_internal_expiration` directly](row-level-ttl.html#set-the-row-level-ttl-for-an-individual-row), and the row is later updated (e.g., using an [`ON UPDATE` expression](create-table.html#on-update-expressions)), the TTL override is lost; it is reset to `now() + ttl_expire_after`.
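-
-For example, a minimal sketch of manually filtering out expired rows, as noted in the second item above (the `events` table name is illustrative):
-
-~~~ sql
-SELECT * FROM events WHERE crdb_internal_expiration > now();
-~~~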
diff --git a/src/current/_includes/v22.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md b/src/current/_includes/v22.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md
deleted file mode 100644
index 0c8be84fd54..00000000000
--- a/src/current/_includes/v22.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md
+++ /dev/null
@@ -1,60 +0,0 @@
-Schema change [DDL](https://en.wikipedia.org/wiki/Data_definition_language#ALTER_statement) statements that run inside a multi-statement transaction with non-DDL statements can fail at [`COMMIT`](commit-transaction.html) time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded.
-
-If such a failure occurs, CockroachDB will emit a CockroachDB-specific error code, `XXA00`, and the following error message:
-
-```
-transaction committed but schema change aborted with error:
-HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
-Manual inspection may be required to determine the actual state of the database.
-```
-
-{{site.data.alerts.callout_danger}}
-If you must execute schema change DDL statements inside a multi-statement transaction, we **strongly recommend** checking for this error code and handling it appropriately every time you execute such transactions.
-{{site.data.alerts.end}}
-
-This error will occur in various scenarios, including but not limited to:
-
-- Creating a unique index fails because values aren't unique.
-- The evaluation of a computed value fails.
-- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column.
-
-To see an example of this error, start by creating the following table.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE T(x INT);
-INSERT INTO T(x) VALUES (1), (2), (3);
-~~~
-
-Then, enter the following multi-statement transaction, which will trigger the error.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-BEGIN;
-ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x);
-INSERT INTO T(x) VALUES (3);
-COMMIT;
-~~~
-
-~~~
-pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x"
-HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
-Manual inspection may be required to determine the actual state of the database.
-~~~
-
-In this example, the [`INSERT`](insert.html) statement committed, but the [`ALTER TABLE`](alter-table.html) statement adding a [`UNIQUE` constraint](unique.html) failed. We can verify this by looking at the data in table `t` and seeing that the additional non-unique value `3` was successfully inserted.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM t;
-~~~
-
-~~~
- x
-+---+
- 1
- 2
- 3
- 3
-(4 rows)
-~~~
diff --git a/src/current/_includes/v22.1/known-limitations/schema-changes-between-prepared-statements.md b/src/current/_includes/v22.1/known-limitations/schema-changes-between-prepared-statements.md
deleted file mode 100644
index 736fe99df61..00000000000
--- a/src/current/_includes/v22.1/known-limitations/schema-changes-between-prepared-statements.md
+++ /dev/null
@@ -1,33 +0,0 @@
-When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE users (id INT PRIMARY KEY);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PREPARE prep1 AS SELECT * FROM users;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER TABLE users ADD COLUMN name STRING;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO users VALUES (1, 'Max Roach');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-EXECUTE prep1;
-~~~
-
-~~~
-ERROR: cached plan must not change result type
-SQLSTATE: 0A000
-~~~
-
-It's therefore recommended to explicitly list result columns instead of using `SELECT *` in prepared statements, when possible.
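-
-For example, a minimal sketch of the recommended approach for the table above:
-
-~~~ sql
-PREPARE prep2 AS SELECT id, name FROM users;
-~~~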
diff --git a/src/current/_includes/v22.1/known-limitations/schema-changes-within-transactions.md b/src/current/_includes/v22.1/known-limitations/schema-changes-within-transactions.md
deleted file mode 100644
index b0a62d43e34..00000000000
--- a/src/current/_includes/v22.1/known-limitations/schema-changes-within-transactions.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Within a single [transaction](transactions.html):
-
-- You can run schema changes inside the same transaction as a [`CREATE TABLE`](create-table.html) statement. For more information, see [Run schema changes inside a transaction with `CREATE TABLE`](online-schema-changes.html#run-schema-changes-inside-a-transaction-with-create-table). However, a `CREATE TABLE` statement containing [`FOREIGN KEY`](foreign-key.html) clauses cannot be followed by statements that reference the new table.
-- [Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed](#schema-change-ddl-statements-inside-a-multi-statement-transaction-can-fail-while-other-statements-succeed).
-- [`DROP COLUMN`](drop-column.html) can result in data loss if one of the other schema changes in the transaction fails or is canceled. To work around this, move the `DROP COLUMN` statement to its own explicit transaction or run it in a single statement outside the existing transaction.
-
-{{site.data.alerts.callout_info}}
-If a schema change within a transaction fails, manual intervention may be needed to determine which statement has failed. After determining which schema change(s) failed, you can then retry the schema change.
-{{site.data.alerts.end}}
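-
-For example, a minimal sketch of running `DROP COLUMN` in its own explicit transaction, as recommended above (the table and column names are illustrative):
-
-~~~ sql
-BEGIN;
-ALTER TABLE users DROP COLUMN address;
-COMMIT;
-~~~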
diff --git a/src/current/_includes/v22.1/known-limitations/set-transaction-no-rollback.md b/src/current/_includes/v22.1/known-limitations/set-transaction-no-rollback.md
deleted file mode 100644
index 4ab3661f4f7..00000000000
--- a/src/current/_includes/v22.1/known-limitations/set-transaction-no-rollback.md
+++ /dev/null
@@ -1,17 +0,0 @@
-{% if page.name == "set-vars.md" %} `SET` {% else %} [`SET`](set-vars.html) {% endif %} does not properly apply [`ROLLBACK`](rollback-transaction.html) within a transaction. For example, in the following transaction, showing the `TIME ZONE` [variable](set-vars.html#supported-variables) does not return `2` as expected after the rollback:
-
-~~~sql
-SET TIME ZONE +2;
-BEGIN;
-SET TIME ZONE +3;
-ROLLBACK;
-SHOW TIME ZONE;
-~~~
-
-~~~
-timezone
-------------
-3
-~~~
-
-[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/69396)
diff --git a/src/current/_includes/v22.1/known-limitations/show-backup-locality-incremental-location.md b/src/current/_includes/v22.1/known-limitations/show-backup-locality-incremental-location.md
deleted file mode 100644
index c19aa8d10b4..00000000000
--- a/src/current/_includes/v22.1/known-limitations/show-backup-locality-incremental-location.md
+++ /dev/null
@@ -1 +0,0 @@
-{% if page.name == "show-backup.md" %}`SHOW BACKUP`{% else %}[`SHOW BACKUP`](show-backup.html){% endif %} can display backups taken with the `incremental_location` option **or** for [locality-aware backups](take-and-restore-locality-aware-backups.html). It will not display backups for locality-aware backups taken with the `incremental_location` option. [Tracking GitHub issue](https://github.com/cockroachdb/cockroach/issues/82912).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/known-limitations/single-col-stats-deletion.md b/src/current/_includes/v22.1/known-limitations/single-col-stats-deletion.md
deleted file mode 100644
index b8baa46c5d2..00000000000
--- a/src/current/_includes/v22.1/known-limitations/single-col-stats-deletion.md
+++ /dev/null
@@ -1,3 +0,0 @@
-[Single-column statistics](create-statistics.html#create-statistics-on-a-single-column) are not deleted when columns are dropped, which could cause minor performance issues.
-
- [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407)
diff --git a/src/current/_includes/v22.1/known-limitations/sql-cursors.md b/src/current/_includes/v22.1/known-limitations/sql-cursors.md
deleted file mode 100644
index e204de9f74a..00000000000
--- a/src/current/_includes/v22.1/known-limitations/sql-cursors.md
+++ /dev/null
@@ -1,25 +0,0 @@
-CockroachDB implements SQL {% if page.name == "known-limitations.md" %} [cursor](cursors.html) {% else %} cursor {% endif %} support with the following limitations:
-
-- `DECLARE` only supports forward cursors. Reverse cursors created with `DECLARE SCROLL` are not supported. [cockroachdb/cockroach#77102](https://github.com/cockroachdb/cockroach/issues/77102)
-- `FETCH` supports forward, relative, and absolute variants, but only for forward cursors. [cockroachdb/cockroach#77102](https://github.com/cockroachdb/cockroach/issues/77102)
-- `BINARY CURSOR`, which returns data in the Postgres binary format, is not supported. [cockroachdb/cockroach#77099](https://github.com/cockroachdb/cockroach/issues/77099)
-- `MOVE`, which allows advancing the cursor without returning any rows, is not supported. [cockroachdb/cockroach#77100](https://github.com/cockroachdb/cockroach/issues/77100)
- - `WITH HOLD`, which allows keeping a cursor open for longer than a transaction by writing its results into a buffer, is accepted as valid syntax within a single transaction but is not supported. It acts as a no-op and does not actually perform the function of `WITH HOLD`, which is to make the cursor live outside its parent transaction. Instead, if you are using `WITH HOLD`, you will be forced to close that cursor within the transaction it was created in. [cockroachdb/cockroach#77101](https://github.com/cockroachdb/cockroach/issues/77101)
- - This syntax is accepted (but does not have any effect):
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- BEGIN;
- DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar;
- CLOSE test_cur;
- COMMIT;
- ~~~
- - This syntax is not accepted, and will result in an error:
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- BEGIN;
- DECLARE test_cur CURSOR WITH HOLD FOR SELECT * FROM foo ORDER BY bar;
- COMMIT; -- This will fail with an error because CLOSE test_cur was not called inside the transaction.
- ~~~
-- Scrollable cursor (also known as reverse `FETCH`) is not supported.
-- [`SELECT ... FOR UPDATE`](select-for-update.html) with a cursor is not supported. [cockroachdb/cockroach#77103](https://github.com/cockroachdb/cockroach/issues/77103)
-- Respect for [`SAVEPOINT`s](savepoint.html) is not supported. Cursor definitions do not disappear properly if rolled back to a `SAVEPOINT` from before they were created. [cockroachdb/cockroach#77104](https://github.com/cockroachdb/cockroach/issues/77104)
diff --git a/src/current/_includes/v22.1/known-limitations/stats-refresh-upgrade.md b/src/current/_includes/v22.1/known-limitations/stats-refresh-upgrade.md
deleted file mode 100644
index f54a08b3754..00000000000
--- a/src/current/_includes/v22.1/known-limitations/stats-refresh-upgrade.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The [automatic statistics refresher](cost-based-optimizer.html#control-statistics-refresh-rate) automatically checks whether it needs to refresh statistics for every table in the database upon startup of each node in the cluster. If statistics for a table have not been refreshed in a while, this will trigger collection of statistics for that table. If statistics have been refreshed recently, it will not force a refresh. As a result, the automatic statistics refresher does not necessarily perform a refresh of statistics after an [upgrade](upgrade-cockroach-version.html). This could cause a problem, for example, if the upgrade moves from a version without [histograms](cost-based-optimizer.html#control-histogram-collection) to a version with histograms. To refresh statistics manually, use [`CREATE STATISTICS`](create-statistics.html).
-
- [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54816)
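-
-For example, a minimal sketch of a manual refresh (the statistics and table names are illustrative):
-
-~~~ sql
-CREATE STATISTICS users_stats FROM users;
-~~~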
diff --git a/src/current/_includes/v22.1/known-limitations/userfile-upload-non-recursive.md b/src/current/_includes/v22.1/known-limitations/userfile-upload-non-recursive.md
deleted file mode 100644
index 19db5fde6a4..00000000000
--- a/src/current/_includes/v22.1/known-limitations/userfile-upload-non-recursive.md
+++ /dev/null
@@ -1 +0,0 @@
-- `cockroach userfile upload` does not currently allow for recursive uploads from a directory. This feature will be available via the `--recursive` flag in a future version. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/pull/65307)
diff --git a/src/current/_includes/v22.1/metric-names-serverless.md b/src/current/_includes/v22.1/metric-names-serverless.md
deleted file mode 100644
index d26d1e892c3..00000000000
--- a/src/current/_includes/v22.1/metric-names-serverless.md
+++ /dev/null
@@ -1,235 +0,0 @@
-Name | Description
------|-----
-`addsstable.applications` | Number of SSTable ingestions applied (i.e. applied by Replicas)
-`addsstable.copies` | Number of SSTable ingestions that required copying files during application
-`addsstable.proposals` | Number of SSTable ingestions proposed (i.e. sent to Raft by lease holders)
-`admission.wait_sum.kv-stores` | Total wait time in microseconds
-`admission.wait_sum.kv` | Total wait time in microseconds
-`admission.wait_sum.sql-kv-response` | Total wait time in microseconds
-`admission.wait_sum.sql-sql-response` | Total wait time in microseconds
-`capacity.available` | Available storage capacity
-`capacity.reserved` | Capacity reserved for snapshots
-`capacity.used` | Used storage capacity
-`capacity` | Total storage capacity
-`changefeed.emitted_messages` | Messages emitted by all feeds
-`changefeed.error_retries` | Total retryable errors encountered by all changefeeds
-`changefeed.failures` | Total number of changefeed jobs which have failed
-`changefeed.max_behind_nanos` | Largest commit-to-emit duration of any running feed
-`changefeed.running` | Number of currently running changefeeds, including sinkless
-`clock-offset.meannanos` | Mean clock offset with other nodes
-`clock-offset.stddevnanos` | Stddev clock offset with other nodes
-`distsender.batches.partial` | Number of partial batches processed after being divided on range boundaries
-`distsender.batches` | Number of batches processed
-`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered from replica-addressed RPCs
-`distsender.rpc.sent.local` | Number of replica-addressed RPCs sent through the local-server optimization
-`distsender.rpc.sent.nextreplicaerror` | Number of replica-addressed RPCs sent due to per-replica errors
-`distsender.rpc.sent` | Number of replica-addressed RPCs sent
-`exec.error` | Number of batch KV requests that failed to execute on this node. This count excludes transaction restart/abort errors. However, it will include other errors expected during normal operation, such as ConditionFailedError. This metric is thus not an indicator of KV health.
-`exec.latency` | Latency of batch KV requests (including errors) executed on this node. This measures requests already addressed to a single replica, from the moment at which they arrive at the internal gRPC endpoint to the moment at which the response (or an error) is returned. This latency includes in particular commit waits, conflict resolution and replication, and end-users can easily produce high measurements via long-running transactions that conflict with foreground traffic. This metric thus does not provide a good signal for understanding the health of the KV layer.
-`exec.success` | Number of batch KV requests executed successfully on this node. A request is considered to have executed 'successfully' if it either returns a result or a transaction restart/abort error.
-`gcbytesage` | Cumulative age of non-live data
-`gossip.bytes.received` | Number of received gossip bytes
-`gossip.bytes.sent` | Number of sent gossip bytes
-`gossip.connections.incoming` | Number of active incoming gossip connections
-`gossip.connections.outgoing` | Number of active outgoing gossip connections
-`gossip.connections.refused` | Number of refused incoming gossip connections
-`gossip.infos.received` | Number of received gossip Info objects
-`gossip.infos.sent` | Number of sent gossip Info objects
-`intentage` | Cumulative age of intents
-`intentbytes` | Number of bytes in intent KV pairs
-`intentcount` | Count of intent keys
-`jobs.changefeed.resume_retry_error` | Number of changefeed jobs which failed with a retriable error
-`keybytes` | Number of bytes taken up by keys
-`keycount` | Count of all keys
-`leases.epoch` | Number of replica leaseholders using epoch-based leases
-`leases.error` | Number of failed lease requests
-`leases.expiration` | Number of replica leaseholders using expiration-based leases
-`leases.success` | Number of successful lease requests
-`leases.transfers.error` | Number of failed lease transfers
-`leases.transfers.success` | Number of successful lease transfers
-`livebytes` | Number of bytes of live data (keys plus values)
-`livecount` | Count of live keys
-`liveness.epochincrements` | Number of times this node has incremented its liveness epoch
-`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node
-`liveness.heartbeatlatency` | Node liveness heartbeat latency
-`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node
-`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live)
-`queue.consistency.pending` | Number of pending replicas in the consistency checker queue
-`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue
-`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue
-`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue
-`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal
-`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal
-`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine
-`queue.gc.info.intentsconsidered` | Number of 'old' intents
-`queue.gc.info.intenttxns` | Number of associated distinct transactions
-`queue.gc.info.numkeysaffected` | Number of keys with GC'able data
-`queue.gc.info.pushtxn` | Number of attempted pushes
-`queue.gc.info.resolvesuccess` | Number of successful intent resolutions
-`queue.gc.info.resolvetotal` | Number of attempted intent resolutions
-`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns
-`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns
-`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns
-`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine
-`queue.gc.pending` | Number of pending replicas in the MVCC GC queue
-`queue.gc.process.failure` | Number of replicas which failed processing in the MVCC GC queue
-`queue.gc.process.success` | Number of replicas successfully processed by the MVCC GC queue
-`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the MVCC GC queue
-`queue.raftlog.pending` | Number of pending replicas in the Raft log queue
-`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue
-`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue
-`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue
-`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue
-`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue
-`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue
-`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue
-`queue.replicagc.pending` | Number of pending replicas in the replica GC queue
-`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue
-`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue
-`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue
-`queue.replicagc.removereplica` | Number of replica removals attempted by the replica GC queue
-`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue
-`queue.replicate.pending` | Number of pending replicas in the replicate queue
-`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue
-`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue
-`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue
-`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options
-`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue
-`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage)
-`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition)
-`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue
-`queue.split.pending` | Number of pending replicas in the split queue
-`queue.split.process.failure` | Number of replicas which failed processing in the split queue
-`queue.split.process.success` | Number of replicas successfully processed by the split queue
-`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue
-`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue
-`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue
-`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue
-`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue
-`raft.commandsapplied` | Count of Raft commands applied. This measurement is taken on the Raft apply loops of all Replicas (leaders and followers alike), meaning that it does not measure the number of Raft commands *proposed* (in the hypothetical extreme case, all Replicas may apply all commands through snapshots, thus not increasing this metric at all). Instead, it is a proxy for how much work is being done advancing the Replica state machines on this node.
-`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue. The queue is bounded in size, so instead of unbounded growth one would observe a ceiling value in the tens of thousands.
-`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced
-`raft.process.commandcommit.latency` | Latency histogram for applying a batch of Raft commands to the state machine. This metric is misnamed: it measures the latency for *applying* a batch of committed Raft commands to a Replica state machine. This requires only non-durable I/O (except for replication configuration changes). Note that a "batch" in this context is really a sub-batch of the batch received for application during raft ready handling. The 'raft.process.applycommitted.latency' histogram is likely more suitable in most cases, as it measures the total latency across all sub-batches (i.e. the sum of commandcommit.latency for a complete batch).
-`raft.process.logcommit.latency` | Latency histogram for committing Raft log entries to stable storage. This measures the latency of durably committing a group of newly received Raft entries as well as the HardState entry to disk. This excludes any data processing, i.e. we measure purely the commit latency of the resulting Engine write. Homogeneous bands of p50-p99 latencies (in the presence of regular Raft traffic), make it likely that the storage layer is healthy. Spikes in the latency bands can either hint at the presence of large sets of Raft entries being received, or at performance issues at the storage layer.
-`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick()
-`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working. This is the sum of the measurements passed to the raft.process.handleready.latency histogram.
-`raft.rcvd.app` | Number of MsgApp messages received by this store
-`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store
-`raft.rcvd.dropped` | Number of dropped incoming Raft messages
-`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store
-`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store
-`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store
-`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store
-`raft.rcvd.prop` | Number of MsgProp messages received by this store
-`raft.rcvd.snap` | Number of MsgSnap messages received by this store
-`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store
-`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store
-`raft.rcvd.vote` | Number of MsgVote messages received by this store
-`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store
-`raft.ticks` | Number of Raft ticks queued
-`raftlog.behind` | Number of Raft log entries followers on other stores are behind. This gauge provides a view of the aggregate number of log entries the Raft leaders on this node think the followers are behind. Since a raft leader may not always have a good estimate for this information for all of its followers, and since followers are expected to be behind (when they are not required as part of a quorum) *and* the aggregate thus scales like the count of such followers, it is difficult to meaningfully interpret this metric.
-`raftlog.truncated` | Number of Raft log entries truncated
-`range.adds` | Number of range additions
-`range.raftleadertransfers` | Number of raft leader transfers
-`range.removes` | Number of range removals
-`range.snapshots.generated` | Number of generated snapshots
-`range.splits` | Number of range splits
-`ranges.overreplicated` | Number of ranges with more live replicas than the replication target
-`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
-`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
-`ranges` | Number of ranges
-`rebalancing.writespersecond` | Number of keys written (i.e. applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
-`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store
-`replicas.leaders` | Number of raft leaders
-`replicas.leaseholders` | Number of lease holders
-`replicas.quiescent` | Number of quiesced replicas
-`replicas.reserved` | Number of replicas reserved for snapshots
-`replicas` | Number of replicas
-`requests.backpressure.split` | Number of backpressured writes waiting on a Range split. A Range will backpressure (roughly) non-system traffic when the range is above the configured size until the range splits. When the rate of this metric is nonzero over extended periods of time, it should be investigated why splits are not occurring.
-`requests.slow.distsender` | Number of replica-bound RPCs currently stuck or retrying for a long time. Note that this is not a good signal for KV health. The remote side of the RPCs tracked here may experience contention, so an end user can easily cause values for this metric to be emitted by leaving a transaction open for a long time and contending with it using a second transaction.
-`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease. This gauge registering a nonzero value usually indicates range or replica unavailability, and should be investigated. In the common case, we also expect to see 'requests.slow.raft' to register a nonzero value, indicating that the lease requests are not getting a timely response from the replication layer.
-`requests.slow.raft` | Number of requests that have been stuck for a long time in the replication layer. An (evaluated) request has to pass through the replication layer, notably the quota pool and raft. If it fails to do so within a highly permissive duration, the gauge is incremented (and decremented again once the request is either applied or returns an error). A nonzero value indicates range or replica unavailability, and should be investigated.
-`rocksdb.block.cache.hits` | Count of block cache hits
-`rocksdb.block.cache.misses` | Count of block cache misses
-`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache
-`rocksdb.block.cache.usage` | Bytes used by the block cache
-`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked
-`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation
-`rocksdb.compactions` | Number of table compactions
-`rocksdb.flushes` | Number of table flushes
-`rocksdb.memtable.total-size` | Current size of memtable in bytes
-`rocksdb.num-sstables` | Number of storage engine SSTables
-`rocksdb.read-amplification` | Number of disk reads per query
-`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks
-`round-trip-latency` | Distribution of round-trip latencies with other nodes
-`sql.bytesin` | Number of sql bytes received
-`sql.bytesout` | Number of sql bytes sent
-`sql.conn.latency` | Latency to establish and authenticate a SQL connection
-`sql.conns` | Number of active sql connections
-`sql.ddl.count` | Number of SQL DDL statements successfully executed
-`sql.delete.count` | Number of SQL DELETE statements successfully executed
-`sql.distsql.contended_queries.count` | Number of SQL queries that experienced contention
-`sql.distsql.exec.latency` | Latency of DistSQL statement execution
-`sql.distsql.flows.active` | Number of distributed SQL flows currently active
-`sql.distsql.flows.total` | Number of distributed SQL flows executed
-`sql.distsql.queries.active` | Number of SQL queries currently active
-`sql.distsql.queries.total` | Number of SQL queries executed
-`sql.distsql.select.count` | Number of DistSQL SELECT statements
-`sql.distsql.service.latency` | Latency of DistSQL request execution
-`sql.exec.latency` | Latency of SQL statement execution
-`sql.failure.count` | Number of statements resulting in a planning or runtime error
-`sql.full.scan.count` | Number of full table or index scans
-`sql.insert.count` | Number of SQL INSERT statements successfully executed
-`sql.mem.distsql.current` | Current sql statement memory usage for distsql
-`sql.mem.distsql.max` | Memory usage per sql statement for distsql
-`sql.mem.internal.session.current` | Current sql session memory usage for internal
-`sql.mem.internal.session.max` | Memory usage per sql session for internal
-`sql.mem.internal.txn.current` | Current sql transaction memory usage for internal
-`sql.mem.internal.txn.max` | Memory usage per sql transaction for internal
-`sql.misc.count` | Number of other SQL statements successfully executed
-`sql.query.count` | Number of SQL queries executed
-`sql.select.count` | Number of SQL SELECT statements successfully executed
-`sql.service.latency` | Latency of SQL request execution
-`sql.statements.active` | Number of currently active user SQL statements
-`sql.txn.abort.count` | Number of SQL transaction abort errors
-`sql.txn.begin.count` | Number of SQL transaction BEGIN statements successfully executed
-`sql.txn.commit.count` | Number of SQL transaction COMMIT statements successfully executed
-`sql.txn.latency` | Latency of SQL transactions
-`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements successfully executed
-`sql.txns.open` | Number of currently open user SQL transactions
-`sql.update.count` | Number of SQL UPDATE statements successfully executed
-`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo
-`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released
-`sys.cgocalls` | Total number of cgo calls
-`sys.cpu.combined.percent-normalized` | Current user+system cpu percentage, normalized 0-1 by number of cores
-`sys.cpu.sys.ns` | Total system cpu time
-`sys.cpu.sys.percent` | Current system cpu percentage
-`sys.cpu.user.ns` | Total user cpu time
-`sys.cpu.user.percent` | Current user cpu percentage
-`sys.fd.open` | Process open file descriptors
-`sys.fd.softlimit` | Process open FD soft limit
-`sys.gc.count` | Total number of GC runs
-`sys.gc.pause.ns` | Total GC pause
-`sys.gc.pause.percent` | Current GC pause percentage
-`sys.go.allocbytes` | Current bytes of memory allocated by go
-`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released
-`sys.goroutines` | Current number of goroutines
-`sys.host.net.recv.bytes` | Bytes received on all network interfaces since this process started
-`sys.host.net.send.bytes` | Bytes sent on all network interfaces since this process started
-`sys.rss` | Current process RSS
-`sys.uptime` | Process uptime
-`sysbytes` | Number of bytes in system KV pairs
-`syscount` | Count of system KV pairs
-`timeseries.write.bytes` | Total size in bytes of metric samples written to disk
-`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk
-`timeseries.write.samples` | Total number of metric samples written to disk
-`totalbytes` | Total number of bytes taken up by keys and values including non-live data
-`txn.aborts` | Number of aborted KV transactions
-`txn.commits1PC` | Number of KV transaction one-phase commit attempts
-`txn.commits` | Number of committed KV transactions (including 1PC)
-`txn.durations` | KV transaction durations
-`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE
-`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first
-`txn.restarts` | Number of restarted KV transactions
-`valbytes` | Number of bytes taken up by values
-`valcount` | Count of all values
diff --git a/src/current/_includes/v22.1/metric-names.md b/src/current/_includes/v22.1/metric-names.md
deleted file mode 100644
index 84074c0b373..00000000000
--- a/src/current/_includes/v22.1/metric-names.md
+++ /dev/null
@@ -1,256 +0,0 @@
-Name | Description
------|------------
-`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas)
-`addsstable.copies` | Number of SSTable ingestions that required copying files during application
-`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders)
-`build.timestamp` | Build information
-`capacity.available` | Available storage capacity
-`capacity.reserved` | Capacity reserved for snapshots
-`capacity.used` | Used storage capacity
-`capacity` | Total storage capacity
-`changefeed.failures` | Total number of changefeed jobs which have failed
-`changefeed.running` | Number of currently running changefeeds, including sinkless
-`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds
-`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds
-`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges
-`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine
-`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine
-`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions
-`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue
-`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted
-`distsender.batches.partial` | Number of partial batches processed
-`distsender.batches` | Number of batches processed
-`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered
-`distsender.rpc.sent.local` | Number of local RPCs sent
-`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors
-`distsender.rpc.sent` | Number of RPCs sent
-`exec.error` | Number of batch KV requests that failed to execute on this node
-`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node
-`exec.success` | Number of batch KV requests executed successfully on this node
-`gcbytesage` | Cumulative age of non-live data in seconds
-`gossip.bytes.received` | Number of received gossip bytes
-`gossip.bytes.sent` | Number of sent gossip bytes
-`gossip.connections.incoming` | Number of active incoming gossip connections
-`gossip.connections.outgoing` | Number of active outgoing gossip connections
-`gossip.connections.refused` | Number of refused incoming gossip connections
-`gossip.infos.received` | Number of received gossip Info objects
-`gossip.infos.sent` | Number of sent gossip Info objects
-`intentage` | Cumulative age of intents in seconds
-`intentbytes` | Number of bytes in intent KV pairs
-`intentcount` | Count of intent keys
-`keybytes` | Number of bytes taken up by keys
-`keycount` | Count of all keys
-`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated
-`leases.epoch` | Number of replica leaseholders using epoch-based leases
-`leases.error` | Number of failed lease requests
-`leases.expiration` | Number of replica leaseholders using expiration-based leases
-`leases.success` | Number of successful lease requests
-`leases.transfers.error` | Number of failed lease transfers
-`leases.transfers.success` | Number of successful lease transfers
-`livebytes` | Number of bytes of live data (keys plus values), including unreplicated data
-`livecount` | Count of live keys
-`liveness.epochincrements` | Number of times this node has incremented its liveness epoch
-`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node
-`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds
-`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node
-`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live)
-`node-id` | node ID with labels for advertised RPC and HTTP addresses
-`queue.consistency.pending` | Number of pending replicas in the consistency checker queue
-`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue
-`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue
-`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue
-`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal
-`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal
-`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine
-`queue.gc.info.intentsconsidered` | Number of 'old' intents
-`queue.gc.info.intenttxns` | Number of associated distinct transactions
-`queue.gc.info.numkeysaffected` | Number of keys with GC'able data
-`queue.gc.info.pushtxn` | Number of attempted pushes
-`queue.gc.info.resolvesuccess` | Number of successful intent resolutions
-`queue.gc.info.resolvetotal` | Number of attempted intent resolutions
-`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns
-`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns
-`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns
-`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine
-`queue.gc.pending` | Number of pending replicas in the GC queue
-`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue
-`queue.gc.process.success` | Number of replicas successfully processed by the GC queue
-`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue
-`queue.raftlog.pending` | Number of pending replicas in the Raft log queue
-`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue
-`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue
-`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue
-`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue
-`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue
-`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue
-`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue
-`queue.replicagc.pending` | Number of pending replicas in the replica GC queue
-`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue
-`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue
-`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue
-`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue
-`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue
-`queue.replicate.pending` | Number of pending replicas in the replicate queue
-`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue
-`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue
-`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue
-`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options
-`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue
-`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage)
-`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition)
-`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue
-`queue.split.pending` | Number of pending replicas in the split queue
-`queue.split.process.failure` | Number of replicas which failed processing in the split queue
-`queue.split.process.success` | Number of replicas successfully processed by the split queue
-`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue
-`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue
-`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue
-`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue
-`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue
-`raft.commandsapplied` | Count of Raft commands applied
-`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue
-`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced
-`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands
-`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries
-`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick()
-`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working
-`raft.rcvd.app` | Number of MsgApp messages received by this store
-`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store
-`raft.rcvd.dropped` | Number of dropped incoming Raft messages
-`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store
-`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store
-`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store
-`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store
-`raft.rcvd.prop` | Number of MsgProp messages received by this store
-`raft.rcvd.snap` | Number of MsgSnap messages received by this store
-`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store
-`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store
-`raft.rcvd.vote` | Number of MsgVote messages received by this store
-`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store
-`raft.ticks` | Number of Raft ticks queued
-`raftlog.behind` | Number of Raft log entries followers on other stores are behind
-`raftlog.truncated` | Number of Raft log entries truncated
-`range.adds` | Number of range additions
-`range.raftleadertransfers` | Number of Raft leader transfers
-`range.removes` | Number of range removals
-`range.snapshots.generated` | Number of generated snapshots
-`range.snapshots.normal-applied` | Number of applied snapshots
-`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots
-`range.snapshots.rcvd-bytes` | Number of snapshot bytes received
-`range.snapshots.sent-bytes` | Number of snapshot bytes sent
-`range.splits` | Number of range splits
-`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
-`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
-`ranges` | Number of ranges
-`rebalancing.writespersecond` | Number of keys written (i.e., applied by Raft) per second to the store, averaged over a large time period as used in rebalancing decisions
-`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined
-`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined
-`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined
-`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue
-`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue
-`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue
-`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree
-`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue
-`replicas.leaders_invalid_lease` | Number of replicas that are Raft leaders whose lease is invalid
-`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store
-`replicas.leaders` | Number of Raft leaders
-`replicas.leaseholders` | Number of lease holders
-`replicas.quiescent` | Number of quiesced replicas
-`replicas.reserved` | Number of replicas reserved for snapshots
-`replicas` | Number of replicas
-`requests.backpressure.split` | Number of backpressured writes waiting on a Range split
-`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue
-`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender
-`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease
-`requests.slow.raft` | Number of requests that have been stuck for a long time in Raft
-`rocksdb.block.cache.hits` | Count of block cache hits
-`rocksdb.block.cache.misses` | Count of block cache misses
-`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache
-`rocksdb.block.cache.usage` | Bytes used by the block cache
-`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked
-`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation
-`rocksdb.compactions` | Number of table compactions
-`rocksdb.flushes` | Number of table flushes
-`rocksdb.memtable.total-size` | Current size of memtable in bytes
-`rocksdb.num-sstables` | Number of storage engine SSTables
-`rocksdb.read-amplification` | Number of disk reads per query
-`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks
-`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds
-`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error.
-`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error.
-`sql.bytesin` | Number of sql bytes received
-`sql.bytesout` | Number of sql bytes sent
-`sql.conns` | Number of active sql connections
-`sql.ddl.count` | Number of SQL DDL statements
-`sql.delete.count` | Number of SQL DELETE statements
-`sql.distsql.exec.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine. This metric does not include the time to parse and plan the statement.
-`sql.distsql.flows.active` | Number of distributed SQL flows currently active
-`sql.distsql.flows.total` | Number of distributed SQL flows executed
-`sql.distsql.queries.active` | Number of distributed SQL queries currently active
-`sql.distsql.queries.total` | Number of distributed SQL queries executed
-`sql.distsql.select.count` | Number of DistSQL SELECT statements
-`sql.distsql.service.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine, including the time to parse and plan the statement.
-`sql.exec.latency` | Latency in nanoseconds of all SQL statement executions. This metric does not include the time to parse and plan the statement.
-`sql.guardrails.max_row_size_err.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_err` limit.
-`sql.guardrails.max_row_size_log.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_log` limit.
-`sql.insert.count` | Number of SQL INSERT statements
-`sql.mem.current` | Current sql statement memory usage
-`sql.mem.distsql.current` | Current sql statement memory usage for distsql
-`sql.mem.distsql.max` | Memory usage per sql statement for distsql
-`sql.mem.max` | Memory usage per sql statement
-`sql.mem.session.current` | Current sql session memory usage
-`sql.mem.session.max` | Memory usage per sql session
-`sql.mem.txn.current` | Current sql transaction memory usage
-`sql.mem.txn.max` | Memory usage per sql transaction
-`sql.misc.count` | Number of other SQL statements
-`sql.pgwire_cancel.total` | Counter of the number of pgwire query cancel requests
-`sql.pgwire_cancel.ignored` | Counter of the number of pgwire query cancel requests that were ignored due to rate limiting
-`sql.pgwire_cancel.successful` | Counter of the number of pgwire query cancel requests that were successful
-`sql.query.count` | Number of SQL queries
-`sql.select.count` | Number of SQL SELECT statements
-`sql.service.latency` | Latency in nanoseconds of SQL request execution, including the time to parse and plan the statement.
-`sql.txn.abort.count` | Number of SQL transaction ABORT statements
-`sql.txn.begin.count` | Number of SQL transaction BEGIN statements
-`sql.txn.commit.count` | Number of SQL transaction COMMIT statements
-`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements
-`sql.update.count` | Number of SQL UPDATE statements
-`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo
-`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released
-`sys.cgocalls` | Total number of cgo calls
-`sys.cpu.sys.ns` | Total system cpu time in nanoseconds
-`sys.cpu.sys.percent` | Current system cpu percentage
-`sys.cpu.user.ns` | Total user cpu time in nanoseconds
-`sys.cpu.user.percent` | Current user cpu percentage
-`sys.fd.open` | Process open file descriptors
-`sys.fd.softlimit` | Process open FD soft limit
-`sys.gc.count` | Total number of GC runs
-`sys.gc.pause.ns` | Total GC pause in nanoseconds
-`sys.gc.pause.percent` | Current GC pause percentage
-`sys.go.allocbytes` | Current bytes of memory allocated by go
-`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released
-`sys.goroutines` | Current number of goroutines
-`sys.rss` | Current process RSS
-`sys.uptime` | Process uptime in seconds
-`sysbytes` | Number of bytes in system KV pairs
-`syscount` | Count of system KV pairs
-`timeseries.write.bytes` | Total size in bytes of metric samples written to disk
-`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk
-`timeseries.write.samples` | Total number of metric samples written to disk
-`totalbytes` | Total number of bytes taken up by keys and values including non-live data
-`tscache.skl.read.pages` | Number of pages in the read timestamp cache
-`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache
-`tscache.skl.write.pages` | Number of pages in the write timestamp cache
-`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache
-`txn.abandons` | Number of abandoned KV transactions
-`txn.aborts` | Number of aborted KV transactions
-`txn.autoretries` | Number of automatic retries to avoid serializable restarts
-`txn.commits1PC` | Number of committed one-phase KV transactions
-`txn.commits` | Number of committed KV transactions (including 1PC)
-`txn.durations` | KV transaction durations in nanoseconds
-`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command
-`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer
-`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE
-`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first
-`txn.restarts` | Number of restarted KV transactions
-`valbytes` | Number of bytes taken up by values
-`valcount` | Count of all values
diff --git a/src/current/_includes/v22.1/misc/available-capacity-metric.md b/src/current/_includes/v22.1/misc/available-capacity-metric.md
deleted file mode 100644
index 61dbcb9cbf2..00000000000
--- a/src/current/_includes/v22.1/misc/available-capacity-metric.md
+++ /dev/null
@@ -1 +0,0 @@
-If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is [not recommended in production](recommended-production-settings.html#topology)), you must explicitly [set the store size](cockroach-start.html#store) per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/misc/aws-locations.md b/src/current/_includes/v22.1/misc/aws-locations.md
deleted file mode 100644
index 8b073c1f230..00000000000
--- a/src/current/_includes/v22.1/misc/aws-locations.md
+++ /dev/null
@@ -1,18 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`|
-| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -76.453077)` |
-| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` |
-| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` |
-| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` |
-| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` |
-| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` |
-| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` |
-| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` |
-| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` |
-| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` |
-| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` |
-| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` |
-| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` |
-| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` |
-| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v22.1/misc/azure-env-param.md b/src/current/_includes/v22.1/misc/azure-env-param.md
deleted file mode 100644
index 29b5cb04f2d..00000000000
--- a/src/current/_includes/v22.1/misc/azure-env-param.md
+++ /dev/null
@@ -1 +0,0 @@
-The [Azure environment](https://learn.microsoft.com/en-us/azure/deployment-environments/concept-environments-key-concepts#environments) that the storage account belongs to. The accepted values are: `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, `AZUREPUBLICCLOUD`, and [`AZUREUSGOVERNMENTCLOUD`](https://learn.microsoft.com/en-us/azure/azure-government/documentation-government-developer-guide). These are cloud environments that meet security, compliance, and data privacy requirements for the respective instance of Azure cloud. If the parameter is not specified, it will default to `AZUREPUBLICCLOUD`.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/misc/azure-locations.md b/src/current/_includes/v22.1/misc/azure-locations.md
deleted file mode 100644
index 7119ff8b7cb..00000000000
--- a/src/current/_includes/v22.1/misc/azure-locations.md
+++ /dev/null
@@ -1,30 +0,0 @@
-| Location | SQL Statement |
-| -------- | ------------- |
-| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` |
-| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` |
-| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` |
-| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` |
-| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` |
-| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` |
-| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` |
-| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` |
-| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` |
-| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` |
-| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` |
-| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` |
-| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` |
-| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` |
-| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` |
-| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` |
-| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` |
-| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` |
-| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` |
-| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` |
-| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` |
-| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` |
-| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` |
-| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` |
-| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` |
-| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` |
-| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` |
-| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` |
diff --git a/src/current/_includes/v22.1/misc/basic-terms.md b/src/current/_includes/v22.1/misc/basic-terms.md
deleted file mode 100644
index 231e29af81f..00000000000
--- a/src/current/_includes/v22.1/misc/basic-terms.md
+++ /dev/null
@@ -1,12 +0,0 @@
-## CockroachDB architecture terms
-
-Term | Definition
------|------------
-**cluster** | A group of interconnected storage nodes that collaboratively organize transactions, fault tolerance, and data rebalancing.
-**node** | An individual instance of CockroachDB. One or more nodes form a cluster.
-**range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a sorted map of key-value pairs. This keyspace is divided into contiguous chunks called _ranges_, such that every key is found in one range.<br><br>From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the _primary index_ because the table is sorted by the primary key) or a single row in a secondary index. As soon as the size of a range reaches 512 MiB ([the default](../configure-replication-zones.html#range-max-bytes)), it is split into two ranges. This process continues for these new ranges as the table and its indexes continue growing.
-**replica** | A copy of a range stored on a node. By default, there are three [replicas](../configure-replication-zones.html#num_replicas) of each range on different nodes.
-**leaseholder** | The replica that holds the "range lease." This replica receives and coordinates all read and write requests for the range.<br><br>For most types of tables and queries, the leaseholder is the only replica that can serve consistent reads (reads that return "the latest" data).
-**Raft protocol** | The [consensus protocol](replication-layer.html#raft) employed in CockroachDB that ensures that your data is safely stored on multiple nodes and that those nodes agree on the current state even if some of them are temporarily disconnected.
-**Raft leader** | For each range, the replica that is the "leader" for write requests. The leader uses the Raft protocol to ensure that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder.
-**Raft log** | A time-ordered log of writes to a range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication.
diff --git a/src/current/_includes/v22.1/misc/beta-release-warning.md b/src/current/_includes/v22.1/misc/beta-release-warning.md
deleted file mode 100644
index c228f650d04..00000000000
--- a/src/current/_includes/v22.1/misc/beta-release-warning.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-Beta releases are intended for testing and experimentation only. Beta releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/misc/beta-warning.md b/src/current/_includes/v22.1/misc/beta-warning.md
deleted file mode 100644
index 107fc2bfa4b..00000000000
--- a/src/current/_includes/v22.1/misc/beta-warning.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-**This is a beta feature.** It is currently undergoing continued testing. Please [file a GitHub issue](file-an-issue.html) with us if you identify a bug.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/misc/chrome-localhost.md b/src/current/_includes/v22.1/misc/chrome-localhost.md
deleted file mode 100644
index d794ff339d0..00000000000
--- a/src/current/_includes/v22.1/misc/chrome-localhost.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-If you are using Google Chrome, and you are getting an error about not being able to reach `localhost` because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. Enabling this Chrome feature degrades security for all sites running on `localhost`, not just CockroachDB's DB Console, so be sure to enable the feature only temporarily.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/misc/client-side-intervention-example.md b/src/current/_includes/v22.1/misc/client-side-intervention-example.md
deleted file mode 100644
index d0bbfc33695..00000000000
--- a/src/current/_includes/v22.1/misc/client-side-intervention-example.md
+++ /dev/null
@@ -1,28 +0,0 @@
-The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic](advanced-client-side-transaction-retries.html), so it can be used from any programming language or environment. In particular, your retry loop must:
-
-- Raise an error if the `max_retries` limit is reached
-- Retry on `40001` error codes
-- [`COMMIT`](commit-transaction.html) at the end of the `try` block
-- Implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) logic as shown below for best performance
-
-~~~ python
-while true:
- n++
- if n == max_retries:
- throw Error("did not succeed within N retries")
- try:
- # add logic here to run all your statements
- conn.exec('COMMIT')
- break
- catch error:
- if error.code != "40001":
- throw error
- else:
- # This is a retry error, so we roll back the current transaction
- # and sleep for a bit before retrying. The sleep time increases
- # for each failed transaction. Adapted from
- # https://colintemple.com/2017/03/java-exponential-backoff/
- conn.exec('ROLLBACK');
- sleep_ms = int(((2**n) * 100) + rand( 100 - 1 ) + 1)
- sleep(sleep_ms) # Assumes your sleep() takes milliseconds
-~~~
diff --git a/src/current/_includes/v22.1/misc/csv-import-callout.md b/src/current/_includes/v22.1/misc/csv-import-callout.md
deleted file mode 100644
index 60555c5d0b6..00000000000
--- a/src/current/_includes/v22.1/misc/csv-import-callout.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-The column order in your schema must match the column order in the file being imported.
-{{site.data.alerts.end}}
\ No newline at end of file
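
As an illustration of matching column order, here is a minimal sketch assuming a hypothetical `users` table and a `users.csv` file whose fields appear in the order `id, city, name`:

~~~ sql
CREATE TABLE users (
    id UUID PRIMARY KEY,
    city STRING,
    name STRING
);

-- The column list mirrors the field order in the CSV file.
IMPORT INTO users (id, city, name)
    CSV DATA ('userfile:///users.csv');
~~~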
diff --git a/src/current/_includes/v22.1/misc/customizing-the-savepoint-name.md b/src/current/_includes/v22.1/misc/customizing-the-savepoint-name.md
deleted file mode 100644
index ed895f906f3..00000000000
--- a/src/current/_includes/v22.1/misc/customizing-the-savepoint-name.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Set the `force_savepoint_restart` [session variable](set-vars.html#supported-variables) to `true` to enable using a custom name for the [retry savepoint](advanced-client-side-transaction-retries.html#retry-savepoints).
-
-Once this variable is set, the [`SAVEPOINT`](savepoint.html) statement will accept any name for the retry savepoint, not just `cockroach_restart`. In addition, it causes every savepoint name to be equivalent to `cockroach_restart`, thereby disallowing the use of [nested transactions](transactions.html#nested-transactions).
-
-This feature exists to support applications that want to use the [advanced client-side transaction retry protocol](advanced-client-side-transaction-retries.html), but cannot customize the name of savepoints to be `cockroach_restart`. For example, this may be necessary because you are using an ORM that requires its own names for savepoints.
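
A minimal sketch of the resulting flow, assuming an ORM that issues a savepoint named `orm_savepoint` rather than `cockroach_restart`:

~~~ sql
SET force_savepoint_restart = true;

BEGIN;
SAVEPOINT orm_savepoint;   -- treated as the retry savepoint
-- application statements here
RELEASE SAVEPOINT orm_savepoint;
COMMIT;
~~~

On a retryable error (SQLSTATE `40001`), the client would issue `ROLLBACK TO SAVEPOINT orm_savepoint` and retry its statements, exactly as it would with `cockroach_restart`.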
diff --git a/src/current/_includes/v22.1/misc/database-terms.md b/src/current/_includes/v22.1/misc/database-terms.md
deleted file mode 100644
index 11d9bd67c92..00000000000
--- a/src/current/_includes/v22.1/misc/database-terms.md
+++ /dev/null
@@ -1,10 +0,0 @@
-## Database terms
-
-Term | Definition
------|-----------
-**consistency** | The requirement that a transaction must change affected data only in allowed ways. CockroachDB uses "consistency" in both the sense of [ACID semantics](https://en.wikipedia.org/wiki/ACID) and the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), albeit less formally than either definition.
-**isolation** | The degree to which a transaction may be affected by other transactions running at the same time. CockroachDB provides the [`SERIALIZABLE`](https://en.wikipedia.org/wiki/Serializability) isolation level, which is the highest possible and guarantees that every committed transaction has the same result as if each transaction were run one at a time.
-**consensus** | The process of reaching agreement on whether a transaction is committed or aborted. CockroachDB uses the [Raft consensus protocol](#architecture-raft). In CockroachDB, when a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.<br>When a write does not achieve consensus, forward progress halts to maintain consistency within the cluster.

-**replication** | The process of creating and distributing copies of data, as well as ensuring that those copies remain consistent. CockroachDB requires all writes to propagate to a [quorum](https://en.wikipedia.org/wiki/Quorum_%28distributed_computing%29) of copies of the data before being considered committed. This ensures the consistency of your data.
-**transaction** | A set of operations performed on a database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/ACID). This is a crucial feature for a consistent system to ensure developers can trust the data in their database. For more information about how transactions work in CockroachDB, see [Transaction Layer](transaction-layer.html).
-**multi-active availability** | A consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to _active-passive replication_, in which the active node receives 100% of request traffic, and _active-active_ replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast.
diff --git a/src/current/_includes/v22.1/misc/debug-subcommands.md b/src/current/_includes/v22.1/misc/debug-subcommands.md
deleted file mode 100644
index 4f6f7d1c678..00000000000
--- a/src/current/_includes/v22.1/misc/debug-subcommands.md
+++ /dev/null
@@ -1,5 +0,0 @@
-While the `cockroach debug` command has a few subcommands, users are expected to use only the [`zip`](cockroach-debug-zip.html), [`encryption-active-key`](cockroach-debug-encryption-active-key.html), [`merge-logs`](cockroach-debug-merge-logs.html), [`list-files`](cockroach-debug-list-files.html), [`tsdump`](cockroach-debug-tsdump.html), and [`ballast`](cockroach-debug-ballast.html) subcommands.
-
-We recommend using the [`job-trace`](cockroach-debug-job-trace.html) subcommand only when directed by the [Cockroach Labs support team](support-resources.html).
-
-The other `debug` subcommands are useful only to CockroachDB's developers and contributors.
diff --git a/src/current/_includes/v22.1/misc/declarative-schema-changer-note.md b/src/current/_includes/v22.1/misc/declarative-schema-changer-note.md
deleted file mode 100644
index fa0f2d33ed8..00000000000
--- a/src/current/_includes/v22.1/misc/declarative-schema-changer-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-`{{ page.title }}` now uses the [declarative schema changer](online-schema-changes.html#declarative-schema-changer) by default. Declarative schema changer statements and legacy schema changer statements operating on the same objects cannot exist within the same transaction. Either split the transaction into multiple transactions, or disable either the `sql.defaults.use_declarative_schema_changer` [cluster setting](cluster-settings.html) or the `use_declarative_schema_changer` [session variable](set-vars.html).
-{{site.data.alerts.end}}
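For example (a sketch of the two options named in the callout above), the declarative schema changer can be turned off for the current session or, by changing the default, for the whole cluster:

~~~ sql
-- Current session only:
SET use_declarative_schema_changer = 'off';

-- Default for new sessions, cluster-wide:
SET CLUSTER SETTING sql.defaults.use_declarative_schema_changer = 'off';
~~~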
diff --git a/src/current/_includes/v22.1/misc/delete-statistics.md b/src/current/_includes/v22.1/misc/delete-statistics.md
deleted file mode 100644
index 3e4c71db3ec..00000000000
--- a/src/current/_includes/v22.1/misc/delete-statistics.md
+++ /dev/null
@@ -1,15 +0,0 @@
-To delete statistics for all tables in all databases:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DELETE FROM system.table_statistics WHERE true;
-~~~
-
-To delete a named set of statistics (e.g., one named "users_stats"), run a query like the following:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DELETE FROM system.table_statistics WHERE name = 'users_stats';
-~~~
-
-For more information about the `DELETE` statement, see [`DELETE`](delete.html).
diff --git a/src/current/_includes/v22.1/misc/diagnostics-callout.html b/src/current/_includes/v22.1/misc/diagnostics-callout.html
deleted file mode 100644
index a969a8cf152..00000000000
--- a/src/current/_includes/v22.1/misc/diagnostics-callout.html
+++ /dev/null
@@ -1 +0,0 @@
-{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt-out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/misc/enterprise-features.md b/src/current/_includes/v22.1/misc/enterprise-features.md
deleted file mode 100644
index 9534c0ce442..00000000000
--- a/src/current/_includes/v22.1/misc/enterprise-features.md
+++ /dev/null
@@ -1,21 +0,0 @@
-## Cluster optimization
-
-Feature | Description
---------+-------------------------
-[Follower Reads](follower-reads.html) | Reduce read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data.
-[Multi-Region Capabilities](multiregion-overview.html) | Row-level control over where your data is stored to help you reduce read and write latency and meet regulatory requirements.
-[Node Map](enable-node-map.html) | Visualize the geographical distribution of a cluster by plotting its node localities on a world map.
-
-## Recovery and streaming
-
-Feature | Description
---------+-------------------------
-Enterprise [`BACKUP`](backup.html) and restore capabilities | Taking and restoring [incremental backups](take-full-and-incremental-backups.html), [backups with revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), [locality-aware backups](take-and-restore-locality-aware-backups.html), and [encrypted backups](take-and-restore-encrypted-backups.html) require an Enterprise license. [Full backups](take-full-and-incremental-backups.html) do not require an Enterprise license.
-[Changefeeds into a Configurable Sink](create-changefeed.html) | For every change in a configurable allowlist of tables, configure a changefeed to emit a record to a configurable sink: Apache Kafka, cloud storage, Google Cloud Pub/Sub, or a webhook sink. These records can be processed by downstream systems for reporting, caching, or full-text indexing.
-
-## Security and IAM
-
-Feature | Description
---------+-------------------------
-[Encryption at Rest](security-reference/encryption.html#encryption-at-rest-enterprise) | Enable automatic transparent encryption of a node's data on the local disk using AES in counter mode, with all key sizes allowed. This feature works together with CockroachDB's automatic encryption of data in transit.
-[GSSAPI with Kerberos Authentication](gssapi_authentication.html) | Authenticate to your cluster using identities stored in an external enterprise directory system that supports Kerberos, such as Active Directory.
diff --git a/src/current/_includes/v22.1/misc/explore-benefits-see-also.md b/src/current/_includes/v22.1/misc/explore-benefits-see-also.md
deleted file mode 100644
index 6b1a3afed71..00000000000
--- a/src/current/_includes/v22.1/misc/explore-benefits-see-also.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- [Replication & Rebalancing](demo-replication-and-rebalancing.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Low Latency Multi-Region Deployment](demo-low-latency-multi-region-deployment.html)
-- [Serializable Transactions](demo-serializable.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
-- [JSON Support](demo-json-support.html)
diff --git a/src/current/_includes/v22.1/misc/force-index-selection.md b/src/current/_includes/v22.1/misc/force-index-selection.md
deleted file mode 100644
index 5a14daa6f2a..00000000000
--- a/src/current/_includes/v22.1/misc/force-index-selection.md
+++ /dev/null
@@ -1,145 +0,0 @@
-By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table.
-
-{{site.data.alerts.callout_info}}
-Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query.
-{{site.data.alerts.end}}
-
-##### Force index scan
-
-The syntax to force a scan of a specific index is:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM table@my_idx;
-~~~
-
-This is equivalent to the longer expression:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM table@{FORCE_INDEX=my_idx};
-~~~
-
-##### Force reverse scan
-
-The syntax to force a reverse scan of a specific index is:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM table@{FORCE_INDEX=my_idx,DESC};
-~~~
-
-Forcing a reverse scan is sometimes useful during [performance tuning](performance-best-practices-overview.html). For reference, the full syntax for choosing an index and its scan direction is
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM table@{FORCE_INDEX=idx[,DIRECTION]}
-~~~
-
-where the optional `DIRECTION` is either `ASC` (ascending) or `DESC` (descending).
-
-When a direction is specified, that scan direction is forced; otherwise the [cost-based optimizer](cost-based-optimizer.html) is free to choose the direction it calculates will result in the best performance.
-
-You can verify that the optimizer is choosing your desired scan direction using [`EXPLAIN (OPT)`](explain.html#opt-option). For example, given the table
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (k INT PRIMARY KEY, v INT);
-~~~
-
-you can check the scan direction with:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> EXPLAIN (opt) SELECT * FROM users@{FORCE_INDEX=primary,DESC};
-~~~
-
-~~~
- text
-+-------------------------------------+
- scan users,rev
- └── flags: force-index=primary,rev
-(2 rows)
-~~~
-
-##### Force partial index scan
-
-To force a [partial index scan](partial-indexes.html), your statement must have a `WHERE` clause that implies the partial index filter.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE t (
- a INT,
- INDEX idx (a) WHERE a > 0);
-INSERT INTO t(a) VALUES (5);
-SELECT * FROM t@idx WHERE a > 0;
-~~~
-
-~~~
-CREATE TABLE
-
-Time: 13ms total (execution 12ms / network 0ms)
-
-INSERT 1
-
-Time: 22ms total (execution 21ms / network 0ms)
-
- a
------
- 5
-(1 row)
-
-Time: 1ms total (execution 1ms / network 0ms)
-~~~
-
-##### Force partial GIN index scan
-
-To force a [partial GIN index](inverted-indexes.html#partial-gin-indexes) scan, your statement must have a `WHERE` clause that:
-
-- Implies the partial index.
-- Constrains the GIN index scan.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE t;
-CREATE TABLE t (
- j JSON,
- INVERTED INDEX idx (j) WHERE j->'a' = '1');
-INSERT INTO t(j)
- VALUES ('{"a": 1}'),
- ('{"a": 3, "b": 2}'),
- ('{"a": 1, "b": 2}');
-SELECT * FROM t@idx WHERE j->'a' = '1' AND j->'b' = '2';
-~~~
-
-~~~
-DROP TABLE
-
-Time: 68ms total (execution 22ms / network 45ms)
-
-CREATE TABLE
-
-Time: 10ms total (execution 10ms / network 0ms)
-
-INSERT 3
-
-Time: 22ms total (execution 22ms / network 0ms)
-
- j
---------------------
- {"a": 1, "b": 2}
-(1 row)
-
-Time: 1ms total (execution 1ms / network 0ms)
-~~~
-
-##### Prevent full scan
-
-To prevent the optimizer from planning a full scan for a table, specify the `NO_FULL_SCAN` index hint. For example:
-
-~~~ sql
-SELECT * FROM table_name@{NO_FULL_SCAN};
-~~~
-
-To prevent a full scan of a [partial index](#force-partial-index-scan), you must specify `NO_FULL_SCAN` _in combination with_ the partial index using `FORCE_INDEX=index_name`.
-If you specify only `NO_FULL_SCAN`, a full scan of a partial index may be planned.
diff --git a/src/current/_includes/v22.1/misc/gce-locations.md b/src/current/_includes/v22.1/misc/gce-locations.md
deleted file mode 100644
index 22122aae78d..00000000000
--- a/src/current/_includes/v22.1/misc/gce-locations.md
+++ /dev/null
@@ -1,18 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` |
-| us-east4 (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` |
-| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` |
-| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` |
-| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` |
-| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` |
-| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` |
-| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` |
-| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` |
-| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` |
-| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` |
-| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` |
-| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` |
-| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` |
-| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` |
-| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v22.1/misc/geojson_geometry_note.md b/src/current/_includes/v22.1/misc/geojson_geometry_note.md
deleted file mode 100644
index ba5fe199657..00000000000
--- a/src/current/_includes/v22.1/misc/geojson_geometry_note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-The screenshots in these examples were generated using [geojson.io](http://geojson.io), but they are designed to showcase the shapes, not the map. Representing `GEOMETRY` data in GeoJSON can lead to unexpected results if using geometries with [SRIDs](spatial-glossary.html#srid) other than 4326 (as shown below).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/misc/haproxy.md b/src/current/_includes/v22.1/misc/haproxy.md
deleted file mode 100644
index 375af8e937d..00000000000
--- a/src/current/_includes/v22.1/misc/haproxy.md
+++ /dev/null
@@ -1,39 +0,0 @@
-By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly:
-
- ~~~
- global
- maxconn 4096
-
- defaults
- mode tcp
- # Timeout values should be configured for your specific use.
- # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
- timeout connect 10s
- timeout client 1m
- timeout server 1m
- # TCP keep-alive on client side. Server already enables them.
- option clitcpka
-
- listen psql
- bind :26257
- mode tcp
- balance roundrobin
- option httpchk GET /health?ready=1
- server cockroach1 <node1 address>:26257 check port 8080
- server cockroach2 <node2 address>:26257 check port 8080
- server cockroach3 <node3 address>:26257 check port 8080
- ~~~
-
- The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster:
-
- Field | Description
- ------|------------
- `timeout connect` `timeout client` `timeout server` | Timeout values that should be suitable for most deployments.
- `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.<br>This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node.
- `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms.
- `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests.
- `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](cockroach-start.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.
-
- {{site.data.alerts.callout_info}}
- For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html).
- {{site.data.alerts.end}}
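For reference, the configuration file shown above is typically produced with `cockroach gen haproxy`; for example, against a secure cluster (flag values are placeholders):

~~~ shell
$ cockroach gen haproxy --certs-dir=certs --host=<address of any node>
~~~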
diff --git a/src/current/_includes/v22.1/misc/htpp-import-only.md b/src/current/_includes/v22.1/misc/htpp-import-only.md
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/src/current/_includes/v22.1/misc/import-perf.md b/src/current/_includes/v22.1/misc/import-perf.md
deleted file mode 100644
index b0520a9c392..00000000000
--- a/src/current/_includes/v22.1/misc/import-perf.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_success}}
-For best practices for optimizing import performance in CockroachDB, see [Import Performance Best Practices](import-performance-best-practices.html).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/misc/index-storage-parameters.md b/src/current/_includes/v22.1/misc/index-storage-parameters.md
deleted file mode 100644
index 174b50e8bf2..00000000000
--- a/src/current/_includes/v22.1/misc/index-storage-parameters.md
+++ /dev/null
@@ -1,14 +0,0 @@
-| Parameter name | Description | Data type | Default value
-|---------------------+----------------------|-----|------|
-| `bucket_count` | The number of buckets into which a [hash-sharded index](hash-sharded-indexes.html) will split. | Integer | The value of the `sql.defaults.default_hash_sharded_index_bucket_count` [cluster setting](cluster-settings.html). |
-| `geometry_max_x` | The maximum X-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. This only needs to be set if you are using a custom [SRID](spatial-glossary.html#srid). | | Derived from SRID bounds, else `(1 << 31) -1`. |
-| `geometry_max_y` | The maximum Y-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. This only needs to be set if you are using a custom [SRID](spatial-glossary.html#srid). | | Derived from SRID bounds, else `(1 << 31) -1`. |
-| `geometry_min_x` | The minimum X-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. This only needs to be set if the default bounds of the SRID are too large/small for the given data, or SRID = 0 and you wish to use a smaller range (unfortunately this is currently not exposed, but is viewable on ). By default, SRID = 0 assumes `[-min int32, max int32]` ranges. | | Derived from SRID bounds, else `-(1 << 31)`. |
-| `geometry_min_y` | The minimum Y-value of the [spatial reference system](spatial-glossary.html#spatial-reference-system) for the object(s) being covered. This only needs to be set if you are using a custom [SRID](spatial-glossary.html#srid). | | Derived from SRID bounds, else `-(1 << 31)`. |
-| `s2_level_mod` | `s2_max_level` must be divisible by `s2_level_mod`. `s2_level_mod` must be between `1` and `3`. | Integer | `1` |
-| `s2_max_cells` | The maximum number of S2 cells used in the covering. Provides a limit on how much work is done exploring the possible coverings. Allowed values: `1-30`. You may want to use higher values for odd-shaped regions such as skinny rectangles. Used in [spatial indexes](spatial-indexes.html). | Integer | `4` |
-| `s2_max_level` | The maximum level of S2 cell used in the covering. Allowed values: `1-30`. Setting it to less than the default means that CockroachDB will be forced to generate coverings using larger cells. Used in [spatial indexes](spatial-indexes.html). | Integer | `30` |
-
-The following parameters are included for PostgreSQL compatibility and do not affect how CockroachDB runs:
-
-- `fillfactor`
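As a sketch of how these parameters are applied (index, table, and column names are illustrative), index storage parameters are supplied in a `WITH (...)` clause when the index is created:

~~~ sql
-- Hash-sharded index with an explicit bucket count:
CREATE INDEX events_ts_idx ON events (ts) USING HASH WITH (bucket_count = 8);

-- Spatial (GIN) index with tuned S2 covering parameters:
CREATE INVERTED INDEX geom_idx ON roads (geom) WITH (s2_max_cells = 20, s2_max_level = 20);
~~~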
diff --git a/src/current/_includes/v22.1/misc/install-next-steps.html b/src/current/_includes/v22.1/misc/install-next-steps.html
deleted file mode 100644
index bb7a9ebc388..00000000000
--- a/src/current/_includes/v22.1/misc/install-next-steps.html
+++ /dev/null
@@ -1,16 +0,0 @@
-If you're just getting started with CockroachDB:
-
-The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.
diff --git a/src/current/_includes/v22.1/misc/logging-defaults.md b/src/current/_includes/v22.1/misc/logging-defaults.md
deleted file mode 100644
index 1a7ae68a536..00000000000
--- a/src/current/_includes/v22.1/misc/logging-defaults.md
+++ /dev/null
@@ -1,3 +0,0 @@
-By default, this command logs messages to `stderr`. This includes events with `WARNING` [severity](logging.html#logging-levels-severities) and higher.
-
-If you need to troubleshoot this command's behavior, you can [customize its logging behavior](configure-logs.html).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/misc/logging-flags.md b/src/current/_includes/v22.1/misc/logging-flags.md
deleted file mode 100644
index eaadb6c8ddb..00000000000
--- a/src/current/_includes/v22.1/misc/logging-flags.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Flag | Description
------|------------
-`--log` | Configure logging parameters by specifying a YAML payload. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.<br>`--log-config-file` can also be used.<br>**Note:** The deprecated logging flags below cannot be combined with `--log`, and can be defined instead in the YAML payload.
-`--log-config-file` | Configure logging parameters by specifying a path to a YAML file. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.<br>`--log` can also be used.<br>**Note:** The deprecated logging flags below cannot be combined with `--log-config-file`, and can be defined instead in the YAML payload.
-`--log-dir` | **Deprecated.** To enable logging to files and write logs to the specified directory, use [`--log`](configure-logs.html#flag) and set `dir` in the YAML configuration.<br>Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory.
-`--log-group-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After the logging group (i.e., `cockroach`, `cockroach-sql-audit`, `cockroach-auth`, `cockroach-sql-exec`, `cockroach-pebble`) reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-group-max-size=1GiB`.<br>**Default**: 100MiB
-`--log-file-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.<br>**Default**: 10MiB
-`--log-file-verbosity` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Only writes messages to log files if they are at or above the specified [severity level](logging.html#logging-levels-severities), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.<br>**Default**: `INFO`
-`--logtostderr` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Enable logging to `stderr` for messages at or above the specified [severity level](logging.html#logging-levels-severities), such as `--logtostderr=ERROR`.<br>If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.<br>Setting `--logtostderr=NONE` disables logging to `stderr`.
-`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.<br>When set to `false`, messages logged to `stderr` are colorized based on [severity level](logging.html#logging-levels-severities).<br>**Default:** `false`
-`--sql-audit-dir` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. If non-empty, output the `SENSITIVE_ACCESS` [logging channel](logging-overview.html#logging-channels) to this directory.<br>Note that enabling `SENSITIVE_ACCESS` logs can negatively impact performance. As a result, we recommend using the `SENSITIVE_ACCESS` channel for security purposes only. For more information, see [Logging use cases](logging-use-cases.html#security-and-audit-monitoring).
diff --git a/src/current/_includes/v22.1/misc/movr-live-demo.md b/src/current/_includes/v22.1/misc/movr-live-demo.md
deleted file mode 100644
index f8cfb24cb21..00000000000
--- a/src/current/_includes/v22.1/misc/movr-live-demo.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_success}}
-For a live demo of the deployed example application, see [https://movr.cloud](https://movr.cloud).
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/misc/movr-schema.md b/src/current/_includes/v22.1/misc/movr-schema.md
deleted file mode 100644
index 3d9d1f77327..00000000000
--- a/src/current/_includes/v22.1/misc/movr-schema.md
+++ /dev/null
@@ -1,12 +0,0 @@
-The six tables in the `movr` database store user, vehicle, and ride data for MovR:
-
-Table | Description
---------|----------------------------
-`users` | People registered for the service.
-`vehicles` | The pool of vehicles available for the service.
-`rides` | When and where users have rented a vehicle.
-`promo_codes` | Promotional codes for users.
-`user_promo_codes` | Promotional codes in use by users.
-`vehicle_location_histories` | Vehicle location history.
-
-
diff --git a/src/current/_includes/v22.1/misc/movr-workflow.md b/src/current/_includes/v22.1/misc/movr-workflow.md
deleted file mode 100644
index 948d95dc1de..00000000000
--- a/src/current/_includes/v22.1/misc/movr-workflow.md
+++ /dev/null
@@ -1,76 +0,0 @@
-The workflow for MovR is as follows:
-
-1. A user loads the app and sees the 25 closest vehicles.
-
- For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SELECT id, city, status FROM vehicles WHERE city='amsterdam' limit 25;
- ~~~
-
-2. The user signs up for the service.
-
- For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO users (id, name, address, city, credit_card)
- VALUES ('66666666-6666-4400-8000-00000000000f', 'Mariah Lam', '88194 Angela Gardens Suite 60', 'amsterdam', '123245696');
- ~~~
-
- {{site.data.alerts.callout_info}}Normally, a Universally Unique Identifier (UUID) would be generated automatically, but for the sake of this walkthrough we use predetermined UUIDs so they are easier to track in the examples.{{site.data.alerts.end}}
-
-3. In some cases, the user adds their own vehicle to share.
-
- For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO vehicles (id, city, type, owner_id,creation_time,status, current_location, ext)
- VALUES ('ffffffff-ffff-4400-8000-00000000000f', 'amsterdam', 'skateboard', '66666666-6666-4400-8000-00000000000f', current_timestamp(), 'available', '88194 Angela Gardens Suite 60', '{"color": "blue"}');
- ~~~
-4. More often, the user reserves a vehicle and starts a ride, applying a promo code, if available and valid.
-
- For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SELECT code FROM user_promo_codes WHERE user_id ='66666666-6666-4400-8000-00000000000f';
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > UPDATE vehicles SET status = 'in_use' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b';
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO rides (id, city, vehicle_city, rider_id, vehicle_id, start_address,end_address, start_time, end_time, revenue)
- VALUES ('cd032f56-cf1a-4800-8000-00000000066f', 'amsterdam', 'amsterdam', '66666666-6666-4400-8000-00000000000f', 'bbbbbbbb-bbbb-4800-8000-00000000000b', '70458 Mary Crest', '', TIMESTAMP '2020-10-01 10:00:00.123456', NULL, 0.0);
- ~~~
-
-5. During the ride, MovR tracks the location of the vehicle.
-
- For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO vehicle_location_histories (city, ride_id, timestamp, lat, long)
- VALUES ('amsterdam', 'cd032f56-cf1a-4800-8000-00000000066f', current_timestamp(), 52.3676, 4.9041);
- ~~~
-
-6. The user ends the ride and releases the vehicle.
-
- For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > UPDATE vehicles SET status = 'available' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b';
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > UPDATE rides SET end_address ='33862 Charles Junctions Apt. 49', end_time=TIMESTAMP '2020-10-01 10:30:00.123456', revenue=88.6
- WHERE id='cd032f56-cf1a-4800-8000-00000000066f';
- ~~~
diff --git a/src/current/_includes/v22.1/misc/multiregion-max-offset.md b/src/current/_includes/v22.1/misc/multiregion-max-offset.md
deleted file mode 100644
index 07a0dab59c3..00000000000
--- a/src/current/_includes/v22.1/misc/multiregion-max-offset.md
+++ /dev/null
@@ -1 +0,0 @@
-For new clusters using the [multi-region SQL abstractions](multiregion-overview.html), Cockroach Labs recommends lowering the [`--max-offset`](cockroach-start.html#flags-max-offset) setting to `250ms`. This setting is especially helpful for lowering the write latency of [global tables](multiregion-overview.html#global-tables). For existing clusters, changing the setting will require restarting all of the nodes in your cluster at the same time; it cannot be done with a rolling restart.
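For example (a sketch; the other flags are whatever the node already uses), each node in the cluster would be restarted with the lowered offset:

~~~ shell
$ cockroach start --max-offset=250ms --join=<node addresses> <other startup flags>
~~~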
diff --git a/src/current/_includes/v22.1/misc/non-http-source-privileges.md b/src/current/_includes/v22.1/misc/non-http-source-privileges.md
deleted file mode 100644
index dfea2d411e2..00000000000
--- a/src/current/_includes/v22.1/misc/non-http-source-privileges.md
+++ /dev/null
@@ -1,12 +0,0 @@
-The source file URL does **not** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios:
-
-- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default.
-- [Userfile](use-userfile-for-bulk-operations.html)
-
-The source file URL **does** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios:
-
-- S3 or GS using `IMPLICIT` credentials
-- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3
-- [Nodelocal](cockroach-nodelocal-upload.html)
-
-We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html).
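For example, a sketch of an `IMPORT` that uses `SPECIFIED` S3 credentials (the table, bucket name, and keys are placeholders), which does not require the `admin` role:

~~~ sql
IMPORT INTO users (id, name)
    CSV DATA ('s3://bucket-name/users.csv?AWS_ACCESS_KEY_ID=<key>&AWS_SECRET_ACCESS_KEY=<secret>');
~~~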
diff --git a/src/current/_includes/v22.1/misc/remove-user-callout.html b/src/current/_includes/v22.1/misc/remove-user-callout.html
deleted file mode 100644
index 925f83d779d..00000000000
--- a/src/current/_includes/v22.1/misc/remove-user-callout.html
+++ /dev/null
@@ -1 +0,0 @@
-Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user.
diff --git a/src/current/_includes/v22.1/misc/s3-compatible-warning.md b/src/current/_includes/v22.1/misc/s3-compatible-warning.md
deleted file mode 100644
index 1e12b5611d3..00000000000
--- a/src/current/_includes/v22.1/misc/s3-compatible-warning.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we **do not** test [S3-compatible services](use-cloud-storage-for-bulk-operations.html) (e.g., [MinIO](https://min.io/), [Red Hat Ceph](https://docs.ceph.com/en/pacific/radosgw/s3/)).
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/misc/schema-change-stmt-note.md b/src/current/_includes/v22.1/misc/schema-change-stmt-note.md
deleted file mode 100644
index 576fa59a39c..00000000000
--- a/src/current/_includes/v22.1/misc/schema-change-stmt-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-The `{{ page.title }}` statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/misc/schema-change-view-job.md b/src/current/_includes/v22.1/misc/schema-change-view-job.md
deleted file mode 100644
index 8861174d621..00000000000
--- a/src/current/_includes/v22.1/misc/schema-change-view-job.md
+++ /dev/null
@@ -1 +0,0 @@
-This schema change statement is registered as a job. You can view long-running jobs with [`SHOW JOBS`](show-jobs.html).
diff --git a/src/current/_includes/v22.1/misc/session-vars.md b/src/current/_includes/v22.1/misc/session-vars.md
deleted file mode 100644
index 4726ecdfa83..00000000000
--- a/src/current/_includes/v22.1/misc/session-vars.md
+++ /dev/null
@@ -1,82 +0,0 @@
-| Variable name | Description | Initial value | Modify with [`SET`](set-vars.html)? | View with [`SHOW`](show-vars.html)? |
-|---|---|---|---|---|
-| `application_name` | The current application name for statistics collection. | Empty string, or `cockroach` for sessions from the [built-in SQL client](cockroach-sql.html). | Yes | Yes |
-| `bytea_output` | The [mode for conversions from `STRING` to `BYTES`](bytes.html#supported-conversions). | hex | Yes | Yes |
-| `client_min_messages` | The severity level of notices displayed in the [SQL shell](cockroach-sql.html). Accepted values include `debug5`, `debug4`, `debug3`, `debug2`, `debug1`, `log`, `notice`, `warning`, and `error`. | `notice` | Yes | Yes |
-| `crdb_version` | The version of CockroachDB. | CockroachDB OSS version | No | Yes |
-| `database` | The [current database](sql-name-resolution.html#current-database). | Database in connection string, or empty if not specified. | Yes | Yes |
-| `datestyle` | The input string format for [`DATE`](date.html) and [`TIMESTAMP`](timestamp.html) values. Accepted values include `ISO,MDY`, `ISO,DMY`, and `ISO,YMD`. | The value set by the `sql.defaults.datestyle` [cluster setting](cluster-settings.html) (`ISO,MDY`, by default). | Yes | Yes |
-| `default_int_size` | The size, in bytes, of an [`INT`](int.html) type. | `8` | Yes | Yes |
-| `default_transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See [Transactions: Isolation levels](transactions.html#isolation-levels). | `SERIALIZABLE` | No | Yes |
-| `default_transaction_priority` | The default transaction priority for the current session. The supported options are `low`, `normal`, and `high`. | `normal` | Yes | Yes |
-| `default_transaction_quality_of_service` | **New in v22.1:** The default transaction quality of service for the current session. The supported options are `regular`, `critical`, and `background`. See [Set quality of service level](admission-control.html#set-quality-of-service-level-for-a-session). | `regular` | Yes | Yes |
-| `default_transaction_read_only` | The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See [`SET TRANSACTION`](set-transaction.html) for more details. | `off` | Yes | Yes |
-| `default_transaction_use_follower_reads` | If set to `on`, all read-only transactions use [`AS OF SYSTEM TIME follower_read_timestamp()`](as-of-system-time.html) to allow the transaction to use follower reads. If set to `off`, read-only transactions will only use follower reads if an `AS OF SYSTEM TIME` clause is specified in the statement, with an interval of at least 4.8 seconds. | `off` | Yes | Yes |
-| `disallow_full_table_scans` | If set to `on`, all queries that have planned a full table or full secondary index scan will return an error message. This setting does not apply to internal queries, which may plan full table or index scans without checking the session variable. | `off` | Yes | Yes |
-| `distsql` | The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node. | `auto` | Yes | Yes |
-| `enable_implicit_select_for_update` | Indicates whether [`UPDATE`](update.html) and [`UPSERT`](upsert.html) statements acquire locks using the `FOR UPDATE` locking mode during their initial row scan, which improves performance for contended workloads.<br>For more information about how `FOR UPDATE` locking works, see the documentation for [`SELECT FOR UPDATE`](select-for-update.html). | `on` | Yes | Yes |
-| `enable_insert_fast_path` | Indicates whether CockroachDB will use a specialized execution operator for inserting into a table. We recommend leaving this setting `on`. | `on` | Yes | Yes |
-| `enable_implicit_transaction_for_batch_statements` | Indicates whether multiple statements in a single query (a "batch statement") will all run in the same implicit transaction, which matches the PostgreSQL wire protocol. | `off` | Yes | Yes |
-| `enable_zigzag_join` | Indicates whether the [cost-based optimizer](cost-based-optimizer.html) will plan certain queries using a zig-zag merge join algorithm, which searches for the desired intersection by jumping back and forth between the indexes based on the fact that after constraining indexes, they share an ordering. | `on` | Yes | Yes |
-| `extra_float_digits` | The number of digits displayed for floating-point values. Only values between `-15` and `3` are supported. | `0` | Yes | Yes |
-| `force_savepoint_restart` | When set to `true`, allows the [`SAVEPOINT`](savepoint.html) statement to accept any name for a savepoint. | `off` | Yes | Yes |
-| `foreign_key_cascades_limit` | Limits the number of [cascading operations](foreign-key.html#use-a-foreign-key-constraint-with-cascade) that run as part of a single query. | `10000` | Yes | Yes |
-| `idle_in_session_timeout` | Automatically terminates sessions that idle past the specified threshold.<br>When set to `0`, the session will not timeout. | The value set by the `sql.defaults.idle_in_session_timeout` [cluster setting](cluster-settings.html) (`0s`, by default). | Yes | Yes |
-| `idle_in_transaction_session_timeout` | Automatically terminates sessions that are idle in a transaction past the specified threshold.<br>When set to `0`, the session will not timeout. | The value set by the `sql.defaults.idle_in_transaction_session_timeout` [cluster setting](cluster-settings.html) (0s, by default). | Yes | Yes |
-| `index_recommendations_enabled` | **New in v22.1:** If `true`, display recommendations to create indexes required to eliminate full table scans. For more details, see [Default statement plans](explain.html#default-statement-plans). | `true` | Yes | Yes |
-| `inject_retry_errors_enabled` | **New in v22.1:** If `true`, any statement executed inside of an explicit transaction (with the exception of [`SET`](set-vars.html) statements) will return a transaction retry error. If the client retries the transaction using the special [`cockroach_restart SAVEPOINT` name](savepoint.html#savepoints-for-client-side-transaction-retries), after the 3rd retry error, the transaction will proceed as normal. Otherwise, the errors will continue until `inject_retry_errors_enabled` is set to `false`. For more details, see [Testing transaction retry logic](transactions.html#testing-transaction-retry-logic). | `false` | Yes | Yes |
-| `intervalstyle` | The input string format for [`INTERVAL`](interval.html) values. Accepted values include `postgres`, `iso_8601`, and `sql_standard`. | The value set by the `sql.defaults.intervalstyle` [cluster setting](cluster-settings.html) (`postgres`, by default). | Yes | Yes |
-| `is_superuser` | If `on` or `true`, the current user is a member of the [`admin` role](security-reference/authorization.html#admin-role). | User-dependent | No | Yes |
-| `large_full_scan_rows` | Determines which tables are considered "large" such that `disallow_full_table_scans` rejects full table or index scans of "large" tables. To reject all full table or index scans, set to `0`. | `1000` | Yes | Yes |
-| `locality` | The location of the node. For more information, see [Locality](cockroach-start.html#locality). | Node-dependent | No | Yes |
-| `lock_timeout` | The amount of time a query can spend acquiring or waiting for a single [row-level lock](architecture/transaction-layer.html#concurrency-control). In CockroachDB, unlike in PostgreSQL, non-locking reads wait for conflicting locks to be released. As a result, the `lock_timeout` configuration applies to writes, and to locking and non-locking reads in read-write and read-only transactions. If `lock_timeout = 0`, queries do not timeout due to lock acquisitions. | The value set by the `sql.defaults.lock_timeout` [cluster setting](cluster-settings.html) (`0`, by default) | Yes | Yes |
-| `node_id` | The ID of the node currently connected to.<br>This variable is particularly useful for verifying load-balanced connections. | Node-dependent | No | Yes |
-| `null_ordered_last` | **New in v22.1:** Set the default ordering of `NULL`s. The default order is `NULL`s first for ascending order and `NULL`s last for descending order. | `false` | Yes | Yes |
-| `optimizer_use_histograms` | If `on`, the optimizer uses collected histograms for cardinality estimation. | `on` | No | Yes |
-| `optimizer_use_multicol_stats` | If `on`, the optimizer uses collected multi-column statistics for cardinality estimation. | `on` | No | Yes |
-| `prefer_lookup_joins_for_fks` | If `on`, the optimizer prefers [`lookup joins`](joins.html#lookup-joins) to [`merge joins`](joins.html#merge-joins) when performing [`foreign key`](foreign-key.html) checks. | `off` | Yes | Yes |
-| `reorder_joins_limit` | Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan.<br>For more information, see [Join reordering](cost-based-optimizer.html#join-reordering). | `8` | Yes | Yes |
-| `results_buffer_size` | The default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. This can also be set for all connections using the `sql.defaults.results_buffer_size` [cluster setting](cluster-settings.html). Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retryable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Setting to 0 disables any buffering. | `16384` | Yes | Yes |
-| `require_explicit_primary_keys` | If `on`, CockroachDB throws an error for all tables created without an explicit primary key defined. | `off` | Yes | Yes |
-| `search_path` | A list of schemas that will be searched to resolve unqualified table or function names. For more details, see [SQL name resolution](sql-name-resolution.html). | `public` | Yes | Yes |
-| `serial_normalization` | Specifies the default handling of [`SERIAL`](serial.html) in table definitions. Valid options include `'rowid'`, `'virtual_sequence'`, `sql_sequence`, `sql_sequence_cached`, and `unordered_rowid`. If set to `'virtual_sequence'`, the `SERIAL` type auto-creates a sequence for [better compatibility with Hibernate sequences](https://forum.cockroachlabs.com/t/hibernate-sequence-generator-returns-negative-number-and-ignore-unique-rowid/1885). If set to `sql_sequence_cached`, you can use the `sql.defaults.serial_sequences_cache_size` [cluster setting](cluster-settings.html) to control the number of values to cache in a user's session, with a default of 256. If set to `unordered_rowid`, the `SERIAL` type generates a globally unique 64-bit integer (a combination of the insert timestamp and the ID of the node executing the statement) that does not have unique ordering. | `'rowid'` | Yes | Yes |
-| `server_version` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | No | Yes |
-| `server_version_num` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | Yes | Yes |
-| `session_id` | The ID of the current session. | Session-dependent | No | Yes |
-| `session_user` | The user connected for the current session. | User in connection string | No | Yes |
-| `sql_safe_updates` | If `false`, potentially unsafe SQL statements are allowed, including `DROP` of a non-empty database and all dependent objects, [`DELETE`](delete.html) without a `WHERE` clause, [`UPDATE`](update.html) without a `WHERE` clause, and [`ALTER TABLE .. DROP COLUMN`](drop-column.html). See [Allow Potentially Unsafe SQL Statements](cockroach-sql.html#allow-potentially-unsafe-sql-statements) for more details. | `true` for interactive sessions from the [built-in SQL client](cockroach-sql.html), `false` for sessions from other clients | Yes | Yes |
-| `statement_timeout` | The amount of time a statement can run before being stopped. This value can be an `int` (e.g., `10`) and will be interpreted as milliseconds. It can also be an interval or string argument, where the string can be parsed as a valid interval (e.g., `'4s'`). A value of `0` turns it off. | The value set by the `sql.defaults.statement_timeout` [cluster setting](cluster-settings.html) (`0s`, by default). | Yes | Yes |
-| `stub_catalog_tables` | If `off`, querying an unimplemented, empty [`pg_catalog`](pg-catalog.html) table will result in an error, as is the case in v20.2 and earlier. If `on`, querying an unimplemented, empty `pg_catalog` table simply returns no rows. | `on` | Yes | Yes |
-| `timezone` | The default time zone for the current session. This session variable was named `"time zone"` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `UTC` | Yes | Yes |
-| `tracing` | The trace recording state. | `off` | | Yes |
-| `transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See [Transactions: Isolation levels](transactions.html#isolation-levels). This session variable was called `transaction isolation level` (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `SERIALIZABLE` | No | Yes |
-| `transaction_priority` | The priority of the current transaction. See [Transactions: Transaction priorities](transactions.html#transaction-priorities) for more details. This session variable was called `transaction priority` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NORMAL` | Yes | Yes |
-| `transaction_read_only` | The access mode of the current transaction. See [`SET TRANSACTION`](set-transaction.html) for more details. | `off` | Yes | Yes |
-| `transaction_rows_read_err` | The limit for the number of rows read by a SQL transaction. If this value is exceeded the transaction will fail (or the event will be logged to `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes |
-| `transaction_rows_read_log` | The threshold for the number of rows read by a SQL transaction. If this value is exceeded, the event will be logged to `SQL_PERF` (or `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes |
-| `transaction_rows_written_err` | The limit for the number of rows written by a SQL transaction. If this value is exceeded the transaction will fail (or the event will be logged to `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes |
-| `transaction_rows_written_log` | The threshold for the number of rows written by a SQL transaction. If this value is exceeded, the event will be logged to `SQL_PERF` (or `SQL_INTERNAL_PERF` for internal transactions). | `0` | Yes | Yes |
-| `transaction_status` | The state of the current transaction. See [Transactions](transactions.html) for more details. This session variable was called `transaction status` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NoTxn` | No | Yes |
-| `troubleshooting_mode_enabled` | When enabled, avoid performing additional work on queries, such as collecting and emitting telemetry data. This session variable is particularly useful when the cluster is experiencing issues, unavailability, or failure. | `off` | Yes | Yes |
-| `use_declarative_schema_changer` | Whether to use the declarative schema changer for supported statements. See [Declarative schema changer](online-schema-changes.html#declarative-schema-changer) for more details. | `on` | Yes | Yes |
-| `vectorize` | The vectorized execution engine mode. Options include `on` and `off`. For more details, see [Configure vectorized execution for CockroachDB](vectorized-execution.html#configure-vectorized-execution). | `on` | Yes | Yes |
-
-The following session variables are exposed only for backwards compatibility with earlier CockroachDB releases and have no impact on how CockroachDB runs:
-
-| Variable name | Initial value | Modify with [`SET`](set-vars.html)? | View with [`SHOW`](show-vars.html)? |
-|---|---|---|---|
-| `backslash_quote` | `safe_encoding` | No | Yes |
-| `client_encoding` | `UTF8` | No | Yes |
-| `default_tablespace` | | No | Yes |
-| `enable_drop_enum_value` | `off` | Yes | Yes |
-| `enable_seqscan` | `on` | Yes | Yes |
-| `escape_string_warning` | `on` | No | Yes |
-| `experimental_enable_hash_sharded_indexes` | `off` | Yes | Yes |
-| `integer_datetimes` | `on` | No | Yes |
-| `max_identifier_length` | `128` | No | Yes |
-| `max_index_keys` | `32` | No | Yes |
-| `row_security` | `off` | No | Yes |
-| `standard_conforming_strings` | `on` | No | Yes |
-| `server_encoding` | `UTF8` | Yes | Yes |
-| `synchronize_seqscans` | `on` | No | Yes |
-| `synchronous_commit` | `on` | Yes | Yes |
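For example, session variables from the tables above are inspected and changed per session with `SHOW` and `SET` (the values shown are illustrative):

~~~ sql
SHOW application_name;
SET application_name = 'movr-api';
SET statement_timeout = '10s';
~~~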
diff --git a/src/current/_includes/v22.1/misc/set-enterprise-license.md b/src/current/_includes/v22.1/misc/set-enterprise-license.md
deleted file mode 100644
index 55d71273c32..00000000000
--- a/src/current/_includes/v22.1/misc/set-enterprise-license.md
+++ /dev/null
@@ -1,16 +0,0 @@
-As the CockroachDB `root` user, open the [built-in SQL shell](cockroach-sql.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode. Then use the [`SET CLUSTER SETTING`](set-cluster-setting.html) command to set the name of your organization and the license key:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING cluster.organization = 'Acme Company';
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx';
-~~~
diff --git a/src/current/_includes/v22.1/misc/sorting-delete-output.md b/src/current/_includes/v22.1/misc/sorting-delete-output.md
deleted file mode 100644
index a67c7cb3229..00000000000
--- a/src/current/_includes/v22.1/misc/sorting-delete-output.md
+++ /dev/null
@@ -1,9 +0,0 @@
-To sort the output of a `DELETE` statement, use:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> WITH a AS (DELETE ... RETURNING ...)
- SELECT ... FROM a ORDER BY ...
-~~~
-
-For an example, see [Sort and return deleted rows](delete.html#sort-and-return-deleted-rows).
diff --git a/src/current/_includes/v22.1/misc/source-privileges.md b/src/current/_includes/v22.1/misc/source-privileges.md
deleted file mode 100644
index 135a153b83f..00000000000
--- a/src/current/_includes/v22.1/misc/source-privileges.md
+++ /dev/null
@@ -1,12 +0,0 @@
-The source file URL does _not_ require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios:
-
-- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default.
-- [Userfile](use-userfile-for-bulk-operations.html)
-
-The source file URL _does_ require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios:
-
-- S3 or GS using `IMPLICIT` credentials
-- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3
-- [Nodelocal](cockroach-nodelocal-upload.html), [HTTP](use-a-local-file-server-for-bulk-operations.html), or [HTTPS](use-a-local-file-server-for-bulk-operations.html)
-
-We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html).
diff --git a/src/current/_includes/v22.1/misc/storage-class-glacier-incremental.md b/src/current/_includes/v22.1/misc/storage-class-glacier-incremental.md
deleted file mode 100644
index 92d1f6cf90d..00000000000
--- a/src/current/_includes/v22.1/misc/storage-class-glacier-incremental.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-[Incremental backups](take-full-and-incremental-backups.html#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups. The Glacier Flexible Retrieval or Glacier Deep Archive storage classes do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/misc/storage-classes.md b/src/current/_includes/v22.1/misc/storage-classes.md
deleted file mode 100644
index c4dafce941e..00000000000
--- a/src/current/_includes/v22.1/misc/storage-classes.md
+++ /dev/null
@@ -1 +0,0 @@
-Use the parameter to set one of the [storage classes](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) listed in Amazon's documentation. For more general usage information, see Amazon's [Using Amazon S3 storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html) documentation.
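-
-For example, assuming the `S3_STORAGE_CLASS` URI parameter described on the cloud storage page, a [`BACKUP`](backup.html) to the `STANDARD_IA` storage class might look like this sketch (placeholder bucket and keys):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# placeholder bucket and credentials; substitute your own values
-$ cockroach sql --certs-dir=certs -e "BACKUP DATABASE bank INTO 's3://{bucket}/backups?AWS_ACCESS_KEY_ID={access key}&AWS_SECRET_ACCESS_KEY={secret key}&S3_STORAGE_CLASS=STANDARD_IA';"
-~~~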
diff --git a/src/current/_includes/v22.1/misc/table-storage-parameters.md b/src/current/_includes/v22.1/misc/table-storage-parameters.md
deleted file mode 100644
index f4be17d72ce..00000000000
--- a/src/current/_includes/v22.1/misc/table-storage-parameters.md
+++ /dev/null
@@ -1,22 +0,0 @@
-| Parameter name | Description | Data type | Default value |
-|------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------|---------------|
-| `exclude_data_from_backup` | **New in v22.1:** Excludes the data in this table from any future backups. | Boolean | `false` |
-| `sql_stats_automatic_collection_enabled` | Enable [automatic statistics collection](cost-based-optimizer.html#enable-and-disable-automatic-statistics-collection-for-tables) for this table. | Boolean | `true` |
-| `sql_stats_automatic_collection_min_stale_rows` | Minimum number of stale rows in this table that will trigger a statistics refresh. | Integer | `500` |
-| `sql_stats_automatic_collection_fraction_stale_rows` | Fraction of stale rows in this table that will trigger a statistics refresh. | Float | `0.2` |
-| `ttl` | Signifies if a TTL is active. Automatically set and controls the reset of all TTL-related storage parameters. | N/A | N/A |
-| `ttl_automatic_column` | If set, use the value of the `crdb_internal_expiration` hidden column. Always set to `true` and cannot be reset. | Boolean | `true` |
-| `ttl_delete_batch_size` | The number of rows to [delete](delete.html) at a time. Minimum: `1`. | Integer | `100` |
-| `ttl_delete_rate_limit` | The maximum number of rows to be deleted per second (rate limit). `0` means no limit. | Integer | `0` |
-| `ttl_expire_after` | The [interval](interval.html) when a TTL will expire. This parameter is required to enable TTL. Minimum: `'1 microsecond'`. Use `RESET (ttl)` to remove from the table. | Interval | N/A |
-| `ttl_job_cron` | The frequency at which the TTL job runs. | [CRON syntax](https://cron.help) | `'@hourly'` |
-| `ttl_label_metrics` | Whether or not [TTL metrics](row-level-ttl.html#ttl-metrics) are labelled by table name (at the risk of added cardinality). | Boolean | `false` |
-| `ttl_pause` | If set, stops the TTL job from executing. | Boolean | `false` |
-| `ttl_range_concurrency` | Row-level TTL queries split up scans by range; this parameter determines how many ranges are processed concurrently. Minimum: `1`. | Integer | `1` |
-| `ttl_row_stats_poll_interval` | If set, counts rows and expired rows on the table to report as Prometheus metrics while the TTL job is running. Unset by default, meaning no stats are fetched and reported. | Interval | N/A |
-| `ttl_select_batch_size` | The number of rows to [select](select-clause.html) at one time during the row expiration check. Minimum: `1`. | Integer | `500` |
-
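-For example, the TTL parameters above are set as storage parameters on a table. A minimal sketch against a hypothetical `events` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# "events" is a hypothetical table; substitute your own
-$ cockroach sql --certs-dir=certs -e "ALTER TABLE events SET (ttl_expire_after = '30 days', ttl_job_cron = '@daily');"
-~~~
-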
-The following parameters are included for PostgreSQL compatibility and do not affect how CockroachDB runs:
-
-- `autovacuum_enabled`
-- `fillfactor`
diff --git a/src/current/_includes/v22.1/misc/tooling.md b/src/current/_includes/v22.1/misc/tooling.md
deleted file mode 100644
index 4dcb68f3941..00000000000
--- a/src/current/_includes/v22.1/misc/tooling.md
+++ /dev/null
@@ -1,90 +0,0 @@
-## Support levels
-
-Cockroach Labs has partnered with open-source projects, vendors, and individuals to offer the following levels of support with third-party tools:
-
-- **Full support** indicates that Cockroach Labs is committed to maintaining compatibility with the vast majority of the tool's features. CockroachDB is regularly tested against the latest version documented in the table below.
-- **Partial support** indicates that Cockroach Labs is working towards full support for the tool. The primary features of the tool are compatible with CockroachDB (e.g., connecting and basic database operations), but full integration may require additional steps, lack support for all features, or exhibit unexpected behavior.
-- **Partner supported** indicates that Cockroach Labs has a partnership with a third-party vendor that provides support for the CockroachDB integration with their tool.
-
-{{site.data.alerts.callout_info}}
-Unless explicitly stated, support for a [driver](#drivers) or [data access framework](#data-access-frameworks-e-g-orms) does not include [automatic, client-side transaction retry handling](transactions.html#client-side-intervention). For client-side transaction retry handling samples, see [Example Apps](example-apps.html).
-{{site.data.alerts.end}}
-
-If you encounter problems using CockroachDB with any of the tools listed on this page, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward better support.
-
-For a list of tools supported by the CockroachDB community, see [Third-Party Tools Supported by the Community](community-tooling.html).
-
-## Drivers
-
-| Language | Driver | Latest tested version | Support level | CockroachDB adapter | Tutorial |
-|----------+--------+-----------------------+---------------------+---------------------+----------|
-| C | [libpq](http://www.postgresql.org/docs/13/static/libpq.html)| PostgreSQL 13 | Beta | N/A | N/A |
-| C# (.NET) | [Npgsql](https://www.nuget.org/packages/Npgsql/) | 7.0.2 | Full | N/A | [Build a C# App with CockroachDB (Npgsql)](build-a-csharp-app-with-cockroachdb.html) |
-| Go | [pgx](https://github.com/jackc/pgx/releases)<br>[pq](https://github.com/lib/pq) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/pgx.go ||var supportedPGXTag = "||"\n\n %} (use latest version of CockroachDB adapter)<br>{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/libpq.go ||var libPQSupportedTag = "||"\n\n %} | Full<br>Full | [`crdbpgx`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbpgx) (includes client-side transaction retry handling)<br>N/A | [Build a Go App with CockroachDB (pgx)](build-a-go-app-with-cockroachdb.html)<br>[Build a Go App with CockroachDB (pq)](build-a-go-app-with-cockroachdb-pq.html) |
-| Java | [JDBC](https://jdbc.postgresql.org/download/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/pgjdbc.go ||var supportedPGJDBCTag = "||"\n\n %} | Full | N/A | [Build a Java App with CockroachDB (JDBC)](build-a-java-app-with-cockroachdb.html) |
-| JavaScript | [pg](https://www.npmjs.com/package/pg) | 8.2.1 | Full | N/A | [Build a Node.js App with CockroachDB (pg)](build-a-nodejs-app-with-cockroachdb.html) |
-| Python | [psycopg3](https://www.psycopg.org/psycopg3/docs/)<br>{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/gopg.go ||var gopgSupportedTag = "||"\n\n %}<br>v4 | Full<br>Full<br>Full | [`crdbgorm`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbgorm) (includes client-side transaction retry handling)<br>N/A<br>N/A | [Build a Go App with CockroachDB (GORM)](build-a-go-app-with-cockroachdb-gorm.html)<br>N/A<br>[Build a Go App with CockroachDB (upper/db)](build-a-go-app-with-cockroachdb-upperdb.html) |
-| Java | [Hibernate](https://hibernate.org/orm/) (including [Hibernate Spatial](https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#spatial))<br>[jOOQ](https://www.jooq.org/)<br>[MyBatis](https://mybatis.org/mybatis-3/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-22.1/pkg/cmd/roachtest/tests/hibernate.go ||var supportedHibernateTag = "||"\n\n %} (must be at least 5.4.19)<br>3.13.2 (must be at least 3.13.0)<br>3.5.5 | Full<br>Full<br>Full | N/A<br>N/A<br>N/A | [Build a Java App with CockroachDB (Hibernate)](build-a-java-app-with-cockroachdb-hibernate.html)<br>[Build a Java App with CockroachDB (jOOQ)](build-a-java-app-with-cockroachdb-jooq.html)<br>[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html) |
-| JavaScript/TypeScript | [Sequelize](https://www.npmjs.com/package/sequelize)<br>N/A<br>N/A<br>[`sqlalchemy-cockroachdb`](https://pypi.org/project/sqlalchemy-cockroachdb) (includes client-side transaction retry handling) | [Build a Python App with CockroachDB (Django)](build-a-python-app-with-cockroachdb-django.html)<br>N/A (See [peewee docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#cockroach-database).)<br>[Build a Python App with CockroachDB (SQLAlchemy)](build-a-python-app-with-cockroachdb-sqlalchemy.html) |
-
-## Application frameworks
-
-| Framework | Data access | Latest tested version | Support level | Tutorial |
-|-----------+-------------+-----------------------+---------------+----------|
-| Spring | [JDBC](build-a-spring-app-with-cockroachdb-jdbc.html)<br>[JPA (Hibernate)](build-a-spring-app-with-cockroachdb-jpa.html)<br>[MyBatis](build-a-spring-app-with-cockroachdb-mybatis.html) | See individual Java ORM or [driver](#drivers) for data access version support. | See individual Java ORM or [driver](#drivers) for data access support level. | [Build a Spring App with CockroachDB (JDBC)](build-a-spring-app-with-cockroachdb-jdbc.html)<br>[Build a Spring App with CockroachDB (JPA)](build-a-spring-app-with-cockroachdb-jpa.html)<br>[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html)
-
-## Graphical user interfaces (GUIs)
-
-| GUI | Latest tested version | Support level | Tutorial |
-|-----+-----------------------+---------------+----------|
-| [DBeaver](https://dbeaver.com/) | 5.2.3 | Full | [Visualize CockroachDB Schemas with DBeaver](dbeaver.html)
-
-## Integrated development environments (IDEs)
-
-| IDE | Latest tested version | Support level | Tutorial |
-|-----+-----------------------+---------------+----------|
-| [DataGrip](https://www.jetbrains.com/datagrip/) | 2021.1 | Full | N/A
-| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | 2021.1 | Full | [Use IntelliJ IDEA with CockroachDB](intellij-idea.html)
-
-## Enhanced data security tools
-
-| Tool | Support level | Integration |
-|-----+---------------+----------|
-| [Satori](https://satoricyber.com/) | Partner supported | [Satori Integration](satori-integration.html) |
-| [HashiCorp Vault](https://www.vaultproject.io/) | Partner supported | [HashiCorp Vault Integration](hashicorp-integration.html) |
-
-## Schema migration tools
-
-| Tool | Latest tested version | Support level | Tutorial |
-|-----+------------------------+----------------+----------|
-| [Alembic](https://alembic.sqlalchemy.org/en/latest/) | 1.7 | Full | [Migrate CockroachDB Schemas with Alembic](alembic.html)
-| [Flyway](https://flywaydb.org/documentation/commandline/#download-and-installation) | 7.1.0 | Full | [Migrate CockroachDB Schemas with Flyway](flyway.html)
-| [Liquibase](https://www.liquibase.org/download) | 4.2.0 | Full | [Migrate CockroachDB Schemas with Liquibase](liquibase.html)
-| [Prisma](https://prisma.io) | 3.14.0 | Full | [Build a Node.js App with CockroachDB (Prisma)](build-a-nodejs-app-with-cockroachdb-prisma.html)
-
-## Data migration tools
-
-| Tool | Latest tested version | Support level | Tutorial |
-|-----+------------------------+----------------+----------|
-| [AWS DMS](https://aws.amazon.com/dms/) | 3.4.6 | Beta | [Migrate your database to CockroachDB with AWS DMS](aws-dms.html)
-
-## Provisioning tools
-| Tool | Latest tested version | Support level | Documentation |
-|------+-----------------------+---------------+---------------|
-| [Terraform](https://terraform.io/) | 1.3.2 | Beta | [Terraform provider for CockroachDB Cloud](https://github.com/cockroachdb/terraform-provider-cockroach#get-started) |
-
-## Other tools
-
-| Tool | Latest tested version | Support level | Tutorial |
-|-----+------------------------+---------------+----------|
-| [Flowable](https://github.com/flowable/flowable-engine) | 6.4.2 | Full | [Getting Started with Flowable and CockroachDB (external)](https://blog.flowable.org/2019/07/11/getting-started-with-flowable-and-cockroachdb/)
diff --git a/src/current/_includes/v22.1/misc/userfile.md b/src/current/_includes/v22.1/misc/userfile.md
deleted file mode 100644
index 1a23d5d2c39..00000000000
--- a/src/current/_includes/v22.1/misc/userfile.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
- CockroachDB now supports uploading files to a [user-scoped file storage](use-userfile-for-bulk-operations.html) using a SQL connection. We recommend using `userfile` instead of `nodelocal`, as it is user-scoped and more secure.
-{{site.data.alerts.end}}
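-
-For example, a file can be uploaded to `userfile` storage with a single command (a sketch with a placeholder path; adjust the connection flags for your deployment):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# placeholder source path; substitute your own file
-$ cockroach userfile upload /path/to/data.csv /data.csv --certs-dir=certs
-~~~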
diff --git a/src/current/_includes/v22.1/orchestration/apply-custom-resource.md b/src/current/_includes/v22.1/orchestration/apply-custom-resource.md
deleted file mode 100644
index e7aacf41a1e..00000000000
--- a/src/current/_includes/v22.1/orchestration/apply-custom-resource.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Apply the new settings to the cluster:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl apply -f example.yaml
-~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/apply-helm-values.md b/src/current/_includes/v22.1/orchestration/apply-helm-values.md
deleted file mode 100644
index 90f9c8783f8..00000000000
--- a/src/current/_includes/v22.1/orchestration/apply-helm-values.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Apply the custom values to override the default Helm chart [values](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb
-~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/apply-statefulset-manifest.md b/src/current/_includes/v22.1/orchestration/apply-statefulset-manifest.md
deleted file mode 100644
index 0236903c497..00000000000
--- a/src/current/_includes/v22.1/orchestration/apply-statefulset-manifest.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Apply the new settings to the cluster:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl apply -f {statefulset-manifest}.yaml
-~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-basic-sql.md b/src/current/_includes/v22.1/orchestration/kubernetes-basic-sql.md
deleted file mode 100644
index f7cfbd76641..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-basic-sql.md
+++ /dev/null
@@ -1,44 +0,0 @@
-1. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES (1, 1000.50);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- id | balance
- +----+---------+
- 1 | 1000.50
- (1 row)
- ~~~
-
-1. [Create a user with a password](create-user.html#create-a-user-with-a-password):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
- ~~~
-
- You will need this username and password to access the DB Console later.
-
-1. Exit the SQL shell and pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-cockroach-cert.md b/src/current/_includes/v22.1/orchestration/kubernetes-cockroach-cert.md
deleted file mode 100644
index ff44cf183a4..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-cockroach-cert.md
+++ /dev/null
@@ -1,90 +0,0 @@
-{{site.data.alerts.callout_info}}
-The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. Read our [Authentication](authentication.html#using-digital-certificates-with-cockroachdb) docs to learn about other methods of signing certificates.
-{{site.data.alerts.end}}
-
-1. Create two directories:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs my-safe-directory
- ~~~
-
- Directory | Description
- ----------|------------
- `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory.
- `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
-
-1. Create the CA certificate and key pair:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-1. Create a client certificate and key pair for the root user:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-1. Upload the client certificate and key to the Kubernetes cluster as a secret:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret \
- generic cockroachdb.client.root \
- --from-file=certs
- ~~~
-
- ~~~
- secret/cockroachdb.client.root created
- ~~~
-
-1. Create the certificate and key pair for your CockroachDB nodes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- localhost 127.0.0.1 \
- cockroachdb-public \
- cockroachdb-public.default \
- cockroachdb-public.default.svc.cluster.local \
- *.cockroachdb \
- *.cockroachdb.default \
- *.cockroachdb.default.svc.cluster.local \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-1. Upload the node certificate and key to the Kubernetes cluster as a secret:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret \
- generic cockroachdb.node \
- --from-file=certs
- ~~~
-
- ~~~
- secret/cockroachdb.node created
- ~~~
-
-1. Check that the secrets were created on the cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get secrets
- ~~~
-
- ~~~
- NAME TYPE DATA AGE
- cockroachdb.client.root Opaque 3 41m
- cockroachdb.node Opaque 5 14s
- default-token-6qjdb kubernetes.io/service-account-token 3 4m
- ~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-helm.md
deleted file mode 100644
index 4ec3d2f171f..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-helm.md
+++ /dev/null
@@ -1,118 +0,0 @@
-You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes)
-(including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims.
-
-{{site.data.alerts.callout_info}}
-These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=helm).
-{{site.data.alerts.end}}
-
-1. Get the persistent volume claims for the volumes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- ~~~
-
-1. In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe storageclass standard
- ~~~
-
- ~~~
- Name: standard
- IsDefaultClass: Yes
- Annotations: storageclass.kubernetes.io/is-default-class=true
- Provisioner: kubernetes.io/gce-pd
- Parameters: type=pd-standard
- AllowVolumeExpansion: False
- MountOptions:
- ReclaimPolicy: Delete
- VolumeBindingMode: Immediate
- Events:
- ~~~
-
- If necessary, edit the storage class:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
- ~~~
-
- ~~~
- storageclass.storage.k8s.io/standard patched
- ~~~
-
-1. Edit one of the persistent volume claims to request more space:
-
- {{site.data.alerts.callout_info}}
- The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
- ~~~
-
- ~~~
- persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched
- ~~~
-
-1. Check the capacity of the persistent volume claim:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-my-release-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m
- ~~~
-
- If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity.
-
- {{site.data.alerts.callout_success}}
- Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`.
- {{site.data.alerts.end}}
-
-1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe pvc datadir-my-release-cockroachdb-0
- ~~~
-
- ~~~
- Waiting for user to (re-)start a pod to finish file system resize of volume on node.
- ~~~
-
-1. Delete the corresponding pod to restart it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod my-release-cockroachdb-0
- ~~~
-
- The `FileSystemResizePending` condition and message will be removed.
-
-1. View the updated persistent volume claim:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-my-release-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m
- ~~~
-
-1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-manual.md
deleted file mode 100644
index e6cf4bbbddb..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-expand-disk-manual.md
+++ /dev/null
@@ -1,118 +0,0 @@
-You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes)
-(including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims.
-
-{{site.data.alerts.callout_info}}
-These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=manual).
-{{site.data.alerts.end}}
-
-1. Get the persistent volume claims for the volumes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- ~~~
-
-1. In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe storageclass standard
- ~~~
-
- ~~~
- Name: standard
- IsDefaultClass: Yes
- Annotations: storageclass.kubernetes.io/is-default-class=true
- Provisioner: kubernetes.io/gce-pd
- Parameters: type=pd-standard
- AllowVolumeExpansion: False
- MountOptions:
- ReclaimPolicy: Delete
- VolumeBindingMode: Immediate
- Events:
- ~~~
-
- If necessary, edit the storage class:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
- ~~~
-
- ~~~
- storageclass.storage.k8s.io/standard patched
- ~~~
-
-1. Edit one of the persistent volume claims to request more space:
-
- {{site.data.alerts.callout_info}}
- The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
- ~~~
-
- ~~~
- persistentvolumeclaim/datadir-cockroachdb-0 patched
- ~~~
-
-1. Check the capacity of the persistent volume claim:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m
- ~~~
-
- If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity.
-
- {{site.data.alerts.callout_success}}
- Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`.
- {{site.data.alerts.end}}
-
-1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe pvc datadir-cockroachdb-0
- ~~~
-
- ~~~
- Waiting for user to (re-)start a pod to finish file system resize of volume on node.
- ~~~
-
-1. Delete the corresponding pod to restart it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-0
- ~~~
-
- The `FileSystemResizePending` condition and message will be removed.
-
-1. View the updated persistent volume claim:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc datadir-cockroachdb-0
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m
- ~~~
-
-1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v22.1/orchestration/kubernetes-limitations.md
deleted file mode 100644
index b2a3db884c9..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-limitations.md
+++ /dev/null
@@ -1,37 +0,0 @@
-#### Kubernetes version
-
-To deploy CockroachDB {{page.version.version}}, Kubernetes 1.18 or higher is required. Cockroach Labs strongly recommends that you use a Kubernetes version that is [eligible for patch support by the Kubernetes project](https://kubernetes.io/releases/).
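-
-To check which Kubernetes version your cluster is running, you can use:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl version
-~~~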
-
-#### Kubernetes Operator
-
-- The CockroachDB Kubernetes Operator currently deploys clusters in a single region. For multi-region deployments using manual configs, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}).
-
-- Using the Operator, you can give a new cluster an arbitrary number of [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). However, a cluster's labels cannot be modified after it is deployed. To track the status of this limitation, refer to [#993](https://github.com/cockroachdb/cockroach-operator/issues/993) in the Operator project's issue tracker.
-
-#### Helm version
-
-The CockroachDB Helm chart requires Helm 3.0 or higher. If you attempt to use an incompatible Helm version, an error like the following occurs:
-
-~~~ shell
-Error: UPGRADE FAILED: template: cockroachdb/templates/tests/client.yaml:6:14: executing "cockroachdb/templates/tests/client.yaml" at <.Values.networkPolicy.enabled>: nil pointer evaluating interface {}.enabled
-~~~
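-
-To confirm which Helm version is installed before deploying, you can run:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ helm version --short
-~~~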
-
-The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier.
-
-The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and handling breaking changes due to deprecations in Kubernetes APIs.
-
-A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation.
-
-#### Network
-
-Server Name Indication (SNI) is an extension to the TLS protocol which allows a client to indicate which hostname it is attempting to connect to at the start of the TLS handshake. The server can present multiple certificates on the same IP address and TCP port number, and one server can serve multiple secure websites or API services even if they use different certificates.
-
-Due to its order of operations, the PostgreSQL wire protocol's implementation of TLS is not compatible with SNI-based routing in the Kubernetes ingress controller. Instead, use a TCP load balancer for CockroachDB that is not shared with other services.
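-
-For example, with a manual deployment you could expose the existing `cockroachdb-public` service through a cloud TCP load balancer rather than an ingress (a sketch; in Helm deployments the service is named `my-release-cockroachdb-public`):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl patch svc cockroachdb-public -p '{"spec": {"type": "LoadBalancer"}}'
-~~~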
-
-#### Resources
-
-When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB** of memory, and provision at least **2 vCPUs** and **8 GiB** of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide, and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload. For details, see [Resource management](configure-cockroachdb-kubernetes.html#memory-and-cpu).
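-
-With the Helm chart, for example, the per-pod requests can be raised at deploy time. This is only a sketch assuming the chart's `statefulset.resources` values; check the `values.yaml` for your chart version for the exact keys:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# value paths assume the statefulset.resources section of the chart's values.yaml
-$ helm upgrade my-release cockroachdb/cockroachdb \
-    --set statefulset.resources.requests.cpu=4 \
-    --set statefulset.resources.requests.memory=16Gi \
-    --reuse-values
-~~~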
-
-#### Storage
-
-At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local).
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-helm.md
deleted file mode 100644
index cbb34893aad..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-helm.md
+++ /dev/null
@@ -1,126 +0,0 @@
-Before removing a node from your cluster, you must first decommission the node. This lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown).
-{{site.data.alerts.end}}
-
-1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes. For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node status \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-1. Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `my-release-cockroachdb-3`):
-
- {{site.data.alerts.callout_info}}
- You must decommission the node with the highest number in its address. Kubernetes will remove the pod for the node with the highest number in its address when you reduce the replica count.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node decommission 4 \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | membership | is_draining
- -----+---------+----------+--------------------+-----------------+--------------
- 4 | true | 73 | true | decommissioning | false
- ~~~
-
- Once the node has been fully decommissioned, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | membership | is_draining
- -----+---------+----------+--------------------+-----------------+--------------
- 4 | true | 0 | true | decommissioning | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-1. Once the node has been decommissioned, scale down your StatefulSet:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.replicas=3 \
- --reuse-values
- ~~~
-
-1. Verify that the pod was successfully removed:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 51m
- my-release-cockroachdb-1 1/1 Running 0 47m
- my-release-cockroachdb-2 1/1 Running 0 3m
- cockroachdb-client-secure 1/1 Running 0 15m
- ...
- ~~~
-
-1. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-my-release-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- ~~~
-
-1. Verify that the PVC with the highest number in its name is no longer mounted to a pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe pvc datadir-my-release-cockroachdb-3
- ~~~
-
- ~~~
- Name: datadir-my-release-cockroachdb-3
- ...
- Mounted By:
- ~~~
-
-1. Remove the persistent volume by deleting the PVC:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pvc datadir-my-release-cockroachdb-3
- ~~~
-
- ~~~
- persistentvolumeclaim "datadir-my-release-cockroachdb-3" deleted
- ~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-insecure.md b/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-insecure.md
deleted file mode 100644
index 872aa0859f4..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-insecure.md
+++ /dev/null
@@ -1,140 +0,0 @@
-To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown).
-{{site.data.alerts.end}}
-
-1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes:
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node status \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node status \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-
-2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it:
-
- {{site.data.alerts.callout_info}}
- It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node.
- {{site.data.alerts.end}}
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node decommission \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- node decommission \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
-
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | membership | is_draining
- -----+---------+----------+--------------------+-----------------+--------------
- 4 | true | 73 | true | decommissioning | false
- ~~~
-
- Once the node has been fully decommissioned, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | membership | is_draining
- -----+---------+----------+--------------------+-----------------+--------------
- 4 | true | 0 | true | decommissioning | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-3. Once the node has been decommissioned, remove a pod from your StatefulSet:
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset "cockroachdb" scaled
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.replicas=3 \
- --reuse-values
- ~~~
-
-
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-manual.md
deleted file mode 100644
index c8cc789567b..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-remove-nodes-manual.md
+++ /dev/null
@@ -1,126 +0,0 @@
-Before removing a node from your cluster, you must first decommission the node. This lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown).
-{{site.data.alerts.end}}
-
-1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes. For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node status \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-1. Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `cockroachdb-3`):
-
- {{site.data.alerts.callout_info}}
- You must decommission the node with the highest number in its address. Kubernetes will remove the pod for the node with the highest number in its address when you reduce the replica count.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node decommission 4 \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | membership | is_draining
- -----+---------+----------+--------------------+-----------------+--------------
- 4 | true | 73 | true | decommissioning | false
- ~~~
-
- Once the node has been fully decommissioned, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | membership | is_draining
- -----+---------+----------+--------------------+-----------------+--------------
- 4 | true | 0 | true | decommissioning | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-1. Once the node has been decommissioned, scale down your StatefulSet:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb scaled
- ~~~
-
-1. Verify that the pod was successfully removed:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 51m
- cockroachdb-1 1/1 Running 0 47m
- cockroachdb-2 1/1 Running 0 3m
- cockroachdb-client-secure 1/1 Running 0 15m
- ...
- ~~~
-
-1. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pvc
- ~~~
-
- ~~~
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- datadir-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
- ~~~
-
-1. Verify that the PVC with the highest number in its name is no longer mounted to a pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe pvc datadir-cockroachdb-3
- ~~~
-
- ~~~
- Name: datadir-cockroachdb-3
- ...
- Mounted By:
- ~~~
-
-1. Remove the persistent volume by deleting the PVC:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pvc datadir-cockroachdb-3
- ~~~
-
- ~~~
- persistentvolumeclaim "datadir-cockroachdb-3" deleted
- ~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-helm.md
deleted file mode 100644
index 8556b822651..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-helm.md
+++ /dev/null
@@ -1,118 +0,0 @@
-Before scaling CockroachDB, ensure that your Kubernetes cluster has enough worker nodes to host the number of pods you want to add. This is to ensure that two pods are not placed on the same worker node, as recommended in our [production guidance](recommended-production-settings.html#topology).
-
-For example, if you want to scale from 3 CockroachDB nodes to 4, your Kubernetes cluster should have at least 4 worker nodes. You can verify the size of your Kubernetes cluster by running `kubectl get nodes`.
-
-1. Edit your StatefulSet configuration to add another pod for the new CockroachDB node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.replicas=4 \
- --reuse-values
- ~~~
-
- ~~~
- Release "my-release" has been upgraded. Happy Helming!
- LAST DEPLOYED: Tue May 14 14:06:43 2019
- NAMESPACE: default
- STATUS: DEPLOYED
-
- RESOURCES:
- ==> v1beta1/PodDisruptionBudget
- NAME AGE
- my-release-cockroachdb-budget 51m
-
- ==> v1/Pod(related)
-
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 38m
- my-release-cockroachdb-1 1/1 Running 0 39m
- my-release-cockroachdb-2 1/1 Running 0 39m
- my-release-cockroachdb-3 0/1 Pending 0 0s
- my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m
-
- ...
- ~~~
-
-1. Get the name of the `Pending` CSR for the new pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.client.root 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-3 2m system:serviceaccount:default:default Pending
- node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued
- node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued
- node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued
- ...
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
-
-1. Examine the CSR for the new pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.my-release-cockroachdb-3
- ~~~
-
- ~~~
- Name: default.node.my-release-cockroachdb-3
- Labels:
- Annotations:
- CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
- Requesting User: system:serviceaccount:default:default
- Status: Pending
- Subject:
- Common Name: node
- Serial Number:
- Organization: Cockroach
- Subject Alternative Names:
- DNS Names: localhost
- my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local
- my-release-cockroachdb-1.my-release-cockroachdb
- my-release-cockroachdb-public
- my-release-cockroachdb-public.default.svc.cluster.local
- IP Addresses: 127.0.0.1
- 10.48.1.6
- Events:
- ~~~
-
-1. If everything looks correct, approve the CSR for the new pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.my-release-cockroachdb-3
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-3 approved
- ~~~
-
-1. Verify that the new pod started successfully:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 51m
- my-release-cockroachdb-1 1/1 Running 0 47m
- my-release-cockroachdb-2 1/1 Running 0 3m
- my-release-cockroachdb-3 1/1 Running 0 1m
- cockroachdb-client-secure 1/1 Running 0 15m
- ...
- ~~~
-
-1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the fourth node successfully joined the cluster.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-manual.md
deleted file mode 100644
index f42775704d3..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-scale-cluster-manual.md
+++ /dev/null
@@ -1,51 +0,0 @@
-Before scaling up CockroachDB, note the following [topology recommendations](recommended-production-settings.html#topology):
-
-- Each CockroachDB node (running in its own pod) should run on a separate Kubernetes worker node.
-- Each availability zone should have the same number of CockroachDB nodes.
-
-If your cluster has 3 CockroachDB nodes distributed across 3 availability zones (as in our [deployment example](deploy-cockroachdb-with-kubernetes.html?filters=manual)), we recommend scaling up by a multiple of 3 to retain an even distribution of nodes. You should therefore scale up to a minimum of 6 CockroachDB nodes, with 2 nodes in each zone.
-
-1. Run `kubectl get nodes` to list the worker nodes in your Kubernetes cluster. There should be at least as many worker nodes as pods you plan to add. This ensures that no more than one pod will be placed on each worker node.
-
-1. Add worker nodes if necessary:
- - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster). If you deployed a [regional cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster) as we recommended, you will use `--num-nodes` to specify the desired number of worker nodes in each zone. For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- gcloud container clusters resize {cluster-name} --region {region-name} --num-nodes 2
- ~~~
- - On EKS, resize your [Worker Node Group](https://eksctl.io/usage/managing-nodegroups/#scaling).
- - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
- - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
-
-1. Edit your StatefulSet configuration to add pods for each new CockroachDB node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=6
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb scaled
- ~~~
-
-1. Verify that the new pods started successfully:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 51m
- cockroachdb-1 1/1 Running 0 47m
- cockroachdb-2 1/1 Running 0 3m
- cockroachdb-3 1/1 Running 0 1m
- cockroachdb-4 1/1 Running 0 1m
- cockroachdb-5 1/1 Running 0 1m
- cockroachdb-client-secure 1/1 Running 0 15m
- ...
- ~~~
-
-1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the new nodes successfully joined the cluster.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v22.1/orchestration/kubernetes-simulate-failure.md
deleted file mode 100644
index 75ea2902627..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-simulate-failure.md
+++ /dev/null
@@ -1,91 +0,0 @@
-Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
-
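-For reference, the relevant part of the StatefulSet spec looks roughly like this (a minimal excerpt; the StatefulSet name and the surrounding fields depend on how you deployed the cluster):
-
-~~~ yaml
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: cockroachdb   # my-release-cockroachdb for Helm deployments
-spec:
-  replicas: 3         # Kubernetes keeps this many CockroachDB pods running at all times
-~~~
-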
-To see this in action:
-
-1. Terminate one of the CockroachDB nodes:
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-2
- ~~~
-
- ~~~
- pod "cockroachdb-2" deleted
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-2
- ~~~
-
- ~~~
- pod "cockroachdb-2" deleted
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod my-release-cockroachdb-2
- ~~~
-
- ~~~
- pod "my-release-cockroachdb-2" deleted
- ~~~
-
-
-
-
-2. In the DB Console, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.
-
-3. Back in the terminal, verify that the pod was automatically restarted:
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-2 1/1 Running 0 12s
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-2 1/1 Running 0 12s
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod my-release-cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-2 1/1 Running 0 44s
- ~~~
-
-
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-stop-cluster.md b/src/current/_includes/v22.1/orchestration/kubernetes-stop-cluster.md
deleted file mode 100644
index afc17479b82..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-stop-cluster.md
+++ /dev/null
@@ -1,145 +0,0 @@
-To shut down the CockroachDB cluster:
-
-
-{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %}
-
-1. Delete the previously created custom resource:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete -f example.yaml
- ~~~
-
-1. Remove the Operator:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml
- ~~~
-
- This will delete the CockroachDB cluster being run by the Operator. It will *not* delete the persistent volumes that were attached to the pods.
-
- {{site.data.alerts.callout_danger}}
- If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes).
- {{site.data.alerts.end}}
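-
-    If you later decide to delete the persistent volume claims after backing up your data, a minimal sketch of the cleanup is shown below. The label selector is an assumption and may not match your deployment, so list the claims and confirm their names before deleting anything:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    # List the claims first; the label selector used below is an assumption and may differ in your deployment.
-    $ kubectl get pvc
-    $ kubectl delete pvc -l app.kubernetes.io/name=cockroachdb
-    ~~~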
-
-{{site.data.alerts.callout_info}}
-This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl).
-{{site.data.alerts.end}}
-
-
-
-1. Delete the resources associated with the `cockroachdb` label, including the logs and Prometheus and Alertmanager resources:
-
- {{site.data.alerts.callout_danger}}
- This does not include deleting the persistent volumes that were attached to the pods. If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes).
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-    $ kubectl delete pods,statefulsets,services,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,servicemonitor -l app=cockroachdb
- ~~~
-
- ~~~
- pod "cockroachdb-0" deleted
- pod "cockroachdb-1" deleted
- pod "cockroachdb-2" deleted
- statefulset.apps "alertmanager-cockroachdb" deleted
- statefulset.apps "prometheus-cockroachdb" deleted
- service "alertmanager-cockroachdb" deleted
- service "cockroachdb" deleted
- service "cockroachdb-public" deleted
- poddisruptionbudget.policy "cockroachdb-budget" deleted
- job.batch "cluster-init-secure" deleted
- rolebinding.rbac.authorization.k8s.io "cockroachdb" deleted
- clusterrolebinding.rbac.authorization.k8s.io "cockroachdb" deleted
- clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted
- role.rbac.authorization.k8s.io "cockroachdb" deleted
- clusterrole.rbac.authorization.k8s.io "cockroachdb" deleted
- clusterrole.rbac.authorization.k8s.io "prometheus" deleted
- serviceaccount "cockroachdb" deleted
- serviceaccount "prometheus" deleted
- alertmanager.monitoring.coreos.com "cockroachdb" deleted
- prometheus.monitoring.coreos.com "cockroachdb" deleted
- prometheusrule.monitoring.coreos.com "prometheus-cockroachdb-rules" deleted
- servicemonitor.monitoring.coreos.com "cockroachdb" deleted
- ~~~
-
-1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-client-secure
- ~~~
-
- ~~~
- pod "cockroachdb-client-secure" deleted
- ~~~
-
-{{site.data.alerts.callout_info}}
-This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl).
-{{site.data.alerts.end}}
-
-
-
-1. Uninstall the release:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm uninstall my-release
- ~~~
-
- ~~~
- release "my-release" deleted
- ~~~
-
-1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-client-secure
- ~~~
-
- ~~~
- pod "cockroachdb-client-secure" deleted
- ~~~
-
-1. Get the names of any CSRs for the cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.client.root 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued
- default.node.my-release-cockroachdb-3 12m system:serviceaccount:default:default Approved,Issued
- node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued
- node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued
- node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued
- ...
- ~~~
-
-1. Delete any CSRs that you created:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete csr default.client.root default.node.my-release-cockroachdb-0 default.node.my-release-cockroachdb-1 default.node.my-release-cockroachdb-2 default.node.my-release-cockroachdb-3
- ~~~
-
- ~~~
- certificatesigningrequest "default.client.root" deleted
- certificatesigningrequest "default.node.my-release-cockroachdb-0" deleted
- certificatesigningrequest "default.node.my-release-cockroachdb-1" deleted
- certificatesigningrequest "default.node.my-release-cockroachdb-2" deleted
- certificatesigningrequest "default.node.my-release-cockroachdb-3" deleted
- ~~~
-
- {{site.data.alerts.callout_info}}
- This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl).
- {{site.data.alerts.end}}
-
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-helm.md b/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-helm.md
deleted file mode 100644
index 6c796b28074..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-helm.md
+++ /dev/null
@@ -1,257 +0,0 @@
-{% assign previous_version = site.data.versions | where_exp: "previous_version", "previous_version.major_version == page.version.version" | first | map: "previous_version" %}
-
-1. Verify that you can upgrade.
-
- To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta).
-
- Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}.
-
- 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=helm). Be sure to complete all the steps.
-
- 1. Then return to this page and perform a second upgrade to {{ page.version.version }}.
-
- 1. If you are upgrading from any production release of {{ previous_version }}, or from any earlier {{ page.version.version }} patch release, you do not have to go through intermediate releases; continue to step 2.
-
-1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**:
- - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](scale-cockroachdb-kubernetes.html?filters=helm#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).
- - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade.
- - In the **Node List**:
- - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over.
- - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](scale-cockroachdb-kubernetes.html?filters=helm#add-nodes) to your cluster before beginning your upgrade.
-
-{% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.version.version" | first %}
-
-1. Review the [backward-incompatible changes in {{ page.version.version }}](../releases/{{ page.version.version }}.html{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}) and [deprecated features](../releases/{{ page.version.version }}.html{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}.
-
-1. Decide how the upgrade will be finalized.
-
- By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in {{ page.version.version }}](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to {{ previous_version }}. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step.
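-
-    If you want a recent backup to fall back on, one way to take a full-cluster backup before upgrading is sketched below (run it from a SQL shell, such as the client pod used in the next step); the storage URI and its parameters are placeholders, so substitute your own bucket and credentials (see the backup documentation linked above for all supported storage options):
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > -- The URI below is a placeholder; replace it with your own storage location and credentials.
-    > BACKUP INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';
-    ~~~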
-
- {{site.data.alerts.callout_info}}
- Finalization only applies when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) can always be downgraded.
- {{site.data.alerts.end}}
-
- {% if page.secure == true %}
-
- 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
- {% endif %}
-
- 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING cluster.preserve_downgrade_option = '{{ previous_version | remove_first: "v" }}';
- ~~~
-
- 1. Exit the SQL shell and delete the temporary pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., `my-release-cockroachdb-0`, `my-release-cockroachdb-1`, `my-release-cockroachdb-2`), the partition value should be 2:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set statefulset.updateStrategy.rollingUpdate.partition=2
- ~~~
-
-1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet:
-
- {{site.data.alerts.callout_info}}
-    For Helm, before the cluster version can be changed, you must delete the cluster initialization job that was created when the cluster was deployed.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete job my-release-cockroachdb-init
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
- --set image.tag={{page.release_info.version}} \
- --reuse-values
- ~~~
-
-1. Check the status of your cluster's pods. You should see one of them being restarted:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 2m
- my-release-cockroachdb-1 1/1 Running 0 3m
- my-release-cockroachdb-2 0/1 ContainerCreating 0 25s
- my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s
- ...
- ~~~
-
- {{site.data.alerts.callout_info}}
- Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster.
- {{site.data.alerts.end}}
-
-1. After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% if page.secure == true %}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- {% else %}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
- {% endif %}
-
-1. Run the following SQL query to verify that the number of underreplicated ranges is zero:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status;
- ~~~
-
- ~~~
- ranges_underreplicated
- --------------------------
- 0
- (1 row)
- ~~~
-
- This indicates that it is safe to proceed to the next pod.
-
-1. Exit the SQL shell:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Decrement the partition value by 1 to allow the next pod in the cluster to update:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm upgrade \
- my-release \
- cockroachdb/cockroachdb \
-    --set statefulset.updateStrategy.rollingUpdate.partition=1
- ~~~
-
-1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`).
-
-1. Check the image of each pod to confirm that all have been upgraded:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods \
- -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}'
- ~~~
-
- ~~~
- my-release-cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- my-release-cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- my-release-cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- ...
- ~~~
-
- You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details).
-
-
-1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day).
-
- If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
-
- {{site.data.alerts.callout_info}}
- This is only possible when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) are auto-finalized.
- {{site.data.alerts.end}}
-
- To finalize the upgrade, re-enable auto-finalization:
-
- {% if page.secure == true %}
-
- 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
- {% endif %}
-
- 2. Re-enable auto-finalization:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
- ~~~
-
- 3. Exit the SQL shell and delete the temporary pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-manual.md b/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-manual.md
deleted file mode 100644
index 0e4fb1b59ca..00000000000
--- a/src/current/_includes/v22.1/orchestration/kubernetes-upgrade-cluster-manual.md
+++ /dev/null
@@ -1,246 +0,0 @@
-{% assign previous_version = site.data.versions | where_exp: "previous_version", "previous_version.major_version == page.version.version" | first | map: "previous_version" %}
-
-1. Verify that you can upgrade.
-
- To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta).
-
- Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}.
-
- 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}?filters=manual). Be sure to complete all the steps.
-
- 1. Then return to this page and perform a second upgrade to {{ page.version.version }}.
-
- 1. If you are upgrading from any production release of {{ previous_version }}, or from any earlier {{ page.version.version }} patch release, you do not have to go through intermediate releases; continue to step 2.
-
-1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**:
- - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](scale-cockroachdb-kubernetes.html?filters=manual#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).
- - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade.
- - In the **Node List**:
- - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over.
- - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](scale-cockroachdb-kubernetes.html?filters=manual#add-nodes) to your cluster before beginning your upgrade.
-
-{% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.version.version" | first %}
-
-1. Review the [backward-incompatible changes in {{ page.version.version }}](../releases/{{ page.version.version }}.html{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}) and [deprecated features](../releases/{{ page.version.version }}.html{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}.
-
-1. Decide how the upgrade will be finalized.
-
- By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in {{ page.version.version }}](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to {{ previous_version }}. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step.
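-
-    If you want a recent backup to fall back on, one way to take a full-cluster backup before upgrading is sketched below (run it from a SQL shell, such as the client pod used in the next step); the storage URI and its parameters are placeholders, so substitute your own bucket and credentials (see the backup documentation linked above for all supported storage options):
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > -- The URI below is a placeholder; replace it with your own storage location and credentials.
-    > BACKUP INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';
-    ~~~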
-
- {{site.data.alerts.callout_info}}
- Finalization only applies when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) can always be downgraded.
- {{site.data.alerts.end}}
-
- {% if page.secure == true %}
-
- 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html). For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=manual#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-    $ kubectl exec -it cockroachdb-client-secure \
-    -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- {% endif %}
-
- 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING cluster.preserve_downgrade_option = '{{ previous_version | remove_first: "v" }}';
- ~~~
-
- 1. Exit the SQL shell and delete the temporary pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., `cockroachdb-0`, `cockroachdb-1`, `cockroachdb-2`), the partition value should be 2:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- --type='json' \
- -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:{{page.release_info.version}}"}]'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-1. Check the status of your cluster's pods. You should see one of them being restarted:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 2m
- cockroachdb-1 1/1 Running 0 2m
- cockroachdb-2 0/1 Terminating 0 1m
- ...
- ~~~
-
-1. After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% if page.secure == true %}
- {% include_cached copy-clipboard.html %}
- ~~~ shell
-    $ kubectl exec -it cockroachdb-client-secure \
-    -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- {% else %}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- {% endif %}
-
-1. Run the following SQL query to verify that the number of underreplicated ranges is zero:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status;
- ~~~
-
- ~~~
- ranges_underreplicated
- --------------------------
- 0
- (1 row)
- ~~~
-
- This indicates that it is safe to proceed to the next pod.
-
-1. Exit the SQL shell:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Decrement the partition value by 1 to allow the next pod in the cluster to update:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`).
-
-1. Check the image of each pod to confirm that all have been upgraded:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods \
- -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}'
- ~~~
-
- ~~~
- cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- ...
- ~~~
-
- You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details).
-
-1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day).
-
- If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
-
- {{site.data.alerts.callout_info}}
- This is only possible when performing a major version upgrade (for example, from {{ previous_version }}.x to {{ page.version.version }}). Patch version upgrades (for example, within the {{ page.version.version }}.x series) are auto-finalized.
- {{site.data.alerts.end}}
-
- To finalize the upgrade, re-enable auto-finalization:
-
- {% if page.secure == true %}
-
- 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- {% endif %}
-
- 2. Re-enable auto-finalization:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
- ~~~
-
- 3. Exit the SQL shell and delete the temporary pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v22.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v22.1/orchestration/local-start-kubernetes.md
deleted file mode 100644
index e504d052dbe..00000000000
--- a/src/current/_includes/v22.1/orchestration/local-start-kubernetes.md
+++ /dev/null
@@ -1,24 +0,0 @@
-## Before you begin
-
-Before getting started, it's helpful to review some Kubernetes-specific terminology:
-
-Feature | Description
---------|------------
-[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
-[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
-[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
-[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
-
-## Step 1. Start Kubernetes
-
-1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation.
-
-    {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailable field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}}
-
-2. Start a local Kubernetes cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ minikube start
- ~~~
diff --git a/src/current/_includes/v22.1/orchestration/monitor-cluster.md b/src/current/_includes/v22.1/orchestration/monitor-cluster.md
deleted file mode 100644
index 94043bf91ea..00000000000
--- a/src/current/_includes/v22.1/orchestration/monitor-cluster.md
+++ /dev/null
@@ -1,110 +0,0 @@
-To access the cluster's [DB Console](ui-overview.html):
-
-{% if page.secure == true %}
-
-1. On secure clusters, [certain pages of the DB Console](ui-overview.html#db-console-access) can only be accessed by `admin` users.
-
- Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
-
-
-1. Assign `roach` to the `admin` role (you only need to do this once):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > GRANT admin TO roach;
- ~~~
-
-1. Exit the SQL shell and pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-{% endif %}
-
-1. In a new terminal window, port-forward from your local machine to the `cockroachdb-public` service:
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward service/cockroachdb-public 8080
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward service/cockroachdb-public 8080
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward service/my-release-cockroachdb-public 8080
- ~~~
-
-
-
- ~~~
- Forwarding from 127.0.0.1:8080 -> 8080
- ~~~
-
- {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the DB Console. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}}
-
-{% if page.secure == true %}
-
-1. Go to https://localhost:8080 and log in with the username and password you created earlier.
-
- {% include {{ page.version.version }}/misc/chrome-localhost.md %}
-
-{% else %}
-
-1. Go to http://localhost:8080.
-
-{% endif %}
-
-1. In the UI, verify that the cluster is running as expected:
- - View the [Node List](ui-cluster-overview-page.html#node-list) to ensure that all nodes successfully joined the cluster.
- - Click the **Databases** tab on the left to verify that `bank` is listed.
diff --git a/src/current/_includes/v22.1/orchestration/operator-check-namespace.md b/src/current/_includes/v22.1/orchestration/operator-check-namespace.md
deleted file mode 100644
index d6c70aa03dc..00000000000
--- a/src/current/_includes/v22.1/orchestration/operator-check-namespace.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-All `kubectl` steps should be performed in the [namespace where you installed the Operator](deploy-cockroachdb-with-kubernetes.html#install-the-operator). By default, this is `cockroach-operator-system`.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-insecure.md
deleted file mode 100644
index e78276828f0..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-insecure.md
+++ /dev/null
@@ -1,115 +0,0 @@
-{{site.data.alerts.callout_danger}}
-The CockroachDB Helm chart is undergoing maintenance for compatibility with Kubernetes versions 1.17 through 1.21 (the latest version as of this writing). No new feature development is currently planned. For new production and local deployments, we currently recommend using a manual configuration (**Configs** option). If you are experiencing issues with a Helm deployment on production, contact our [Support team](https://support.cockroachlabs.com/).
-{{site.data.alerts.end}}
-
-1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm repo add cockroachdb https://charts.cockroachdb.com/
- ~~~
-
- ~~~
- "cockroachdb" has been added to your repositories
- ~~~
-
-1. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm repo update
- ~~~
-
-1. Modify our Helm chart's [`values.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml) parameters for your deployment scenario.
-
- Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below.
-
- {% include_cached copy-clipboard.html %}
- ~~~
- statefulset:
- resources:
- limits:
- memory: "8Gi"
- requests:
- memory: "8Gi"
- conf:
- cache: "2Gi"
- max-sql-memory: "2Gi"
- ~~~
-
- 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`.
-
- {{site.data.alerts.callout_success}}
- For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`.
- {{site.data.alerts.end}}
-
-1. For an insecure deployment, set `tls.enabled` to `false`. For clarity, this example also includes the configuration from the previous steps.
-
- {% include_cached copy-clipboard.html %}
- ~~~
- statefulset:
- resources:
- limits:
- memory: "8Gi"
- requests:
- memory: "8Gi"
- conf:
- cache: "2Gi"
- max-sql-memory: "2Gi"
- tls:
- enabled: false
- ~~~
-
- 1. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type).
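-
-     For example, a sketch of this override in `my-values.yaml` (the size and storage class values are placeholders, not recommendations):
-
-     {% include_cached copy-clipboard.html %}
-     ~~~ yaml
-     storage:
-       persistentVolume:
-         size: "200Gi"
-         storageClass: "{your-storage-class}"
-     ~~~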
-
- {{site.data.alerts.callout_info}}
- If necessary, you can [expand disk size](/docs/{{ page.version.version }}/configure-cockroachdb-kubernetes.html?filters=helm#expand-disk-size) after the cluster is live.
- {{site.data.alerts.end}}
-
-1. Install the CockroachDB Helm chart.
-
- Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb
- ~~~
-
- Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
-
-1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
- ~~~
-
-1. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
- pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
- pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-secure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-secure.md
deleted file mode 100644
index cd8ac2e7b46..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-helm-secure.md
+++ /dev/null
@@ -1,112 +0,0 @@
-The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier.
-
-The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs.
-
-A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation.
-
-{{site.data.alerts.callout_danger}}
-If you are running a secure Helm deployment on Kubernetes 1.22 and later, you must migrate away from using the Kubernetes CA for cluster authentication. For details, see [Certificate management](secure-cockroachdb-kubernetes.html?filters=helm#migration-to-self-signer).
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_info}}
-Secure CockroachDB deployments on Amazon EKS via Helm are [not yet supported](https://github.com/cockroachdb/cockroach/issues/38847).
-{{site.data.alerts.end}}
-
-1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm repo add cockroachdb https://charts.cockroachdb.com/
- ~~~
-
- ~~~
- "cockroachdb" has been added to your repositories
- ~~~
-
-1. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm repo update
- ~~~
-
-1. The cluster configuration is set in the Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml).
-
- {{site.data.alerts.callout_info}}
- By default, the Helm chart specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html?filters=helm).
- {{site.data.alerts.end}}
-
- Before deploying, modify some parameters in our Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml):
-
- 1. Create a local YAML file (e.g., `my-values.yaml`) to specify your custom values. These will be used to override the defaults in `values.yaml`.
-
- 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`.
-
- {{site.data.alerts.callout_success}}
- For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ yaml
- conf:
- cache: "2Gi"
- max-sql-memory: "2Gi"
- ~~~
-
- The Helm chart defaults to a secure deployment by automatically setting `tls.enabled` to `true`.
-
- {{site.data.alerts.callout_info}}
- By default, the Helm chart will generate and sign 1 client and 1 node certificate to secure the cluster. To authenticate using your own CA, see [Certificate management](/docs/{{ page.version.version }}/secure-cockroachdb-kubernetes.html?filters=helm#use-a-custom-ca).
- {{site.data.alerts.end}}
-
-1. Install the CockroachDB Helm chart, specifying your custom values file.
-
- Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
- {{site.data.alerts.end}}
-
- {{site.data.alerts.callout_danger}}
- To allow the CockroachDB pods to successfully deploy, do not set the [`--wait` flag](https://helm.sh/docs/intro/using_helm/#helpful-options-for-installupgraderollback) when using Helm commands.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release --values {custom-values}.yaml cockroachdb/cockroachdb
- ~~~
-
- Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
-
-1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
- ~~~
-
-1. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
- pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
- pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-insecure.md
deleted file mode 100644
index c0692798b67..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-insecure.md
+++ /dev/null
@@ -1,114 +0,0 @@
-1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it.
-
- Download [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
- ~~~
-
- {{site.data.alerts.callout_info}}
- By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Resource management](configure-cockroachdb-kubernetes.html?filters=manual).
- {{site.data.alerts.end}}
-
- Use the file to create the StatefulSet and start the cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
- Alternatively, if you'd rather start with a configuration file that has been customized for performance:
-
- 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml
- ~~~
-
- 2. Modify the file wherever there is a `TODO` comment.
-
- 3. Use the file to create the StatefulSet and start the cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset-insecure.yaml
- ~~~
-
-2. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
-3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
- pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
- pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
- pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
- ~~~
-
-4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
- ~~~
-
- ~~~
- job.batch/cluster-init created
- ~~~
-
-5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init
- ~~~
-
- ~~~
- NAME COMPLETIONS DURATION AGE
- cluster-init 1/1 7s 27s
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cluster-init-cqf8l 0/1 Completed 0 56s
- cockroachdb-0 1/1 Running 0 7m51s
- cockroachdb-1 1/1 Running 0 7m51s
- cockroachdb-2 1/1 Running 0 7m51s
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-helm-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-helm-insecure.md
deleted file mode 100644
index 494b3e6207e..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-helm-insecure.md
+++ /dev/null
@@ -1,65 +0,0 @@
-1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm repo add cockroachdb https://charts.cockroachdb.com/
- ~~~
-
- ~~~
- "cockroachdb" has been added to your repositories
- ~~~
-
-2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm repo update
- ~~~
-
-3. Install the CockroachDB Helm chart.
-
- Provide a "release" name to identify and track this particular deployment of the chart.
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release cockroachdb/cockroachdb
- ~~~
-
- Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it. Each pod has a distinct network identity and always binds back to the same persistent storage on restart.
-
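- If you need to customize the deployment, chart values can be overridden at install time. For example, a sketch using `--set` (here `statefulset.replicas` is one of the chart's configurable values, and `3` is its default):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ helm install my-release cockroachdb/cockroachdb --set statefulset.replicas=3
- ~~~
-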
-4. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
- ~~~
-
-5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
- pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
- pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs {pod-name}` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-insecure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-insecure.md
deleted file mode 100644
index 37fe8e46939..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-local-insecure.md
+++ /dev/null
@@ -1,83 +0,0 @@
-1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
-2. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
-3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
- pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
- pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
- pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
- ~~~
-
-4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
- ~~~
-
- ~~~
- job.batch/cluster-init created
- ~~~
-
-5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init
- ~~~
-
- ~~~
- NAME COMPLETIONS DURATION AGE
- cluster-init 1/1 7s 27s
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cluster-init-cqf8l 0/1 Completed 0 56s
- cockroachdb-0 1/1 Running 0 7m51s
- cockroachdb-1 1/1 Running 0 7m51s
- cockroachdb-2 1/1 Running 0 7m51s
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs {pod-name}` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-operator-secure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-operator-secure.md
deleted file mode 100644
index bb8c3b445e6..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-operator-secure.md
+++ /dev/null
@@ -1,125 +0,0 @@
-### Install the Operator
-
-{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %}
-
-1. Apply the [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) for the Operator:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/crds.yaml
- ~~~
-
- ~~~
- customresourcedefinition.apiextensions.k8s.io/crdbclusters.crdb.cockroachlabs.com created
- ~~~
-
-1. By default, the Operator is configured to install in the `cockroach-operator-system` namespace and to manage CockroachDB instances for all namespaces on the cluster.
-
- If you'd like to change either of these defaults:
-
- 1. Download the Operator manifest:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml
- ~~~
-
- 1. To use a custom namespace, replace all instances of `namespace: cockroach-operator-system` with your desired namespace.
-
- 1. To limit the namespaces that will be monitored, set the `WATCH_NAMESPACE` environment variable in the `Deployment` pod spec. This can be a single namespace or a comma-delimited set of namespaces. When set, only the `CrdbCluster` resources in the supplied namespace(s) will be reconciled (see the sketch after this list).
-
- 1. Instead of using the command below, apply your local version of the Operator manifest to the cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f operator.yaml
- ~~~
-
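- A minimal sketch of how the `WATCH_NAMESPACE` variable might appear in the `Deployment` pod spec of your modified `operator.yaml` (the namespace names here are only illustrative):
-
- ~~~ yaml
- env:
- - name: WATCH_NAMESPACE
-   value: "team-a,team-b"
- ~~~
-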
- If you want to use the default namespace settings:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml
- ~~~
-
- ~~~
- clusterrole.rbac.authorization.k8s.io/cockroach-database-role created
- serviceaccount/cockroach-database-sa created
- clusterrolebinding.rbac.authorization.k8s.io/cockroach-database-rolebinding created
- role.rbac.authorization.k8s.io/cockroach-operator-role created
- clusterrolebinding.rbac.authorization.k8s.io/cockroach-operator-rolebinding created
- clusterrole.rbac.authorization.k8s.io/cockroach-operator-role created
- serviceaccount/cockroach-operator-sa created
- rolebinding.rbac.authorization.k8s.io/cockroach-operator-default created
- deployment.apps/cockroach-operator created
- ~~~
-
-1. Set your current namespace to the one used by the Operator. For example, to use the Operator's default namespace:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl config set-context --current --namespace=cockroach-operator-system
- ~~~
-
-1. Validate that the Operator is running:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroach-operator-6f7b86ffc4-9ppkv 1/1 Running 0 54s
- ~~~
-
-### Initialize the cluster
-
-{{site.data.alerts.callout_info}}
-After a cluster managed by the Kubernetes operator is initialized, its Kubernetes labels cannot be modified. For more details, refer to [Limitations](#limitations).
-{{site.data.alerts.end}}
-
-1. Download `example.yaml`, a custom resource that tells the Operator how to configure the Kubernetes cluster.
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/example.yaml
- ~~~
-
- {{site.data.alerts.callout_info}}
- By default, this custom resource specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html).
- {{site.data.alerts.end}}
-
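- For illustration only, a resource override in `example.yaml` might look like the following sketch, assuming the custom resource's `resources` field follows the standard Kubernetes requests/limits shape (the values are placeholders, not recommendations):
-
- ~~~ yaml
- spec:
-   resources:
-     requests:
-       cpu: "4"
-       memory: 16Gi
-     limits:
-       cpu: "4"
-       memory: 16Gi
- ~~~
-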
- {{site.data.alerts.callout_info}}
- By default, the Operator will generate and sign 1 client and 1 node certificate to secure the cluster. This means that if you do not provide a CA, a `cockroach`-generated CA is used. If you want to authenticate using your own CA, [specify the generated secrets in the custom resource](secure-cockroachdb-kubernetes.html#use-a-custom-ca) **before** proceeding to the next step.
- {{site.data.alerts.end}}
-
-1. Apply `example.yaml`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f example.yaml
- ~~~
-
- The Operator will create a StatefulSet and initialize the nodes as a cluster.
-
- ~~~
- crdbcluster.crdb.cockroachlabs.com/cockroachdb created
- ~~~
-
-1. Check that the pods were created:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroach-operator-6f7b86ffc4-9t9zb 1/1 Running 0 3m22s
- cockroachdb-0 1/1 Running 0 2m31s
- cockroachdb-1 1/1 Running 0 102s
- cockroachdb-2 1/1 Running 0 46s
- ~~~
-
- Each pod should have `READY` status soon after being created.
diff --git a/src/current/_includes/v22.1/orchestration/start-cockroachdb-secure.md b/src/current/_includes/v22.1/orchestration/start-cockroachdb-secure.md
deleted file mode 100644
index 972cabc2d8e..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-cockroachdb-secure.md
+++ /dev/null
@@ -1,108 +0,0 @@
-### Configure the cluster
-
-1. Download and modify our [StatefulSet configuration](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml
- ~~~
-
-1. Update `secretName` with the name of the corresponding node secret.
-
- The secret names depend on your method for generating secrets. For example, if you follow the [steps using `cockroach cert`](#create-certificates) below, use this secret name:
-
- {% include_cached copy-clipboard.html %}
- ~~~ yaml
- secret:
- secretName: cockroachdb.node
- ~~~
-
-1. The StatefulSet configuration deploys CockroachDB into the `default` namespace. To use a different namespace, search for `kind: RoleBinding` and change its `subjects.namespace` property to the name of the namespace. Otherwise, a `failed to read secrets` error occurs when you attempt to follow the steps in [Initialize the cluster](#initialize-the-cluster).
-
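-For example, a sketch of the edited `RoleBinding` fragment, pointing at a hypothetical `crdb` namespace (other fields unchanged):
-
-~~~ yaml
-kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: cockroachdb
-subjects:
-- kind: ServiceAccount
-  name: cockroachdb
-  namespace: crdb
-~~~
-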
-{{site.data.alerts.callout_info}}
-By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html?filters=manual).
-{{site.data.alerts.end}}
-
-### Create certificates
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs {pod-name}` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
-
-{% include {{ page.version.version }}/orchestration/kubernetes-cockroach-cert.md %}
-
-### Initialize the cluster
-
-1. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- serviceaccount/cockroachdb created
- role.rbac.authorization.k8s.io/cockroachdb created
- rolebinding.rbac.authorization.k8s.io/cockroachdb created
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
-1. Initialize the CockroachDB cluster:
-
- 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
- 1. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pv
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m
- pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m
- pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m
- ~~~
-
- 1. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-0 \
- -- /cockroach/cockroach init \
- --certs-dir=/cockroach/cockroach-certs
- ~~~
-
- ~~~
- Cluster successfully initialized
- ~~~
-
- 1. Confirm that cluster initialization has completed successfully. The pods should soon be considered `Ready`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 3m
- cockroachdb-1 1/1 Running 0 3m
- cockroachdb-2 1/1 Running 0 3m
- ~~~
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/start-kubernetes.md b/src/current/_includes/v22.1/orchestration/start-kubernetes.md
deleted file mode 100644
index 5168d470465..00000000000
--- a/src/current/_includes/v22.1/orchestration/start-kubernetes.md
+++ /dev/null
@@ -1,98 +0,0 @@
-You can use the hosted [Google Kubernetes Engine (GKE)](#hosted-gke) service or the hosted [Amazon Elastic Kubernetes Service (EKS)](#hosted-eks) to quickly start Kubernetes.
-
-{{site.data.alerts.callout_info}}
-Neither GKE nor EKS is required to run CockroachDB on Kubernetes. You can also use a manually created GCE or AWS cluster that runs the [minimum recommended Kubernetes version](#kubernetes-version) and has at least 3 pods, each with [sufficient resources](#resources) to start a CockroachDB node.
-{{site.data.alerts.end}}
-
-### Hosted GKE
-
-1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation.
-
- This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
- {{site.data.alerts.callout_success}}
- The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the DB Console using the steps in this guide.
- {{site.data.alerts.end}}
-
-2. From your local workstation, start the Kubernetes cluster, specifying one of the available [regions](https://cloud.google.com/compute/docs/regions-zones#available) (e.g., `us-east1`):
-
- {{site.data.alerts.callout_success}}
- Since this region can differ from your default `gcloud` region, be sure to include the `--region` flag to run `gcloud` commands against this cluster.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ gcloud container clusters create cockroachdb --machine-type n2-standard-4 --region {region-name} --num-nodes 1
- ~~~
-
- ~~~
- Creating cluster cockroachdb...done.
- ~~~
-
- This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--region` flag specifies a [regional three-zone cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster), and `--num-nodes` specifies one Kubernetes worker node in each zone.
-
- The `--machine-type` flag tells the node pool to use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
-
- The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
-
-3. Get the email address associated with your Google Cloud account:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ gcloud info | grep Account
- ~~~
-
- ~~~
- Account: [your.google.cloud.email@example.org]
- ~~~
-
- {{site.data.alerts.callout_danger}}
- This command returns your email address in all lowercase. However, in the next step, you must enter the address with its exact capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.
- {{site.data.alerts.end}}
-
-4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create clusterrolebinding $USER-cluster-admin-binding \
- --clusterrole=cluster-admin \
- --user={your.google.cloud.email@example.org}
- ~~~
-
- ~~~
- clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created
- ~~~
-
-### Hosted EKS
-
-1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation.
-
- This includes installing and configuring the AWS CLI and `eksctl`, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
- {{site.data.alerts.callout_info}}
- If you are running [EKS-Anywhere](https://aws.amazon.com/eks/eks-anywhere/), CockroachDB requires that you [configure your default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/) to auto-provision persistent volumes. Alternatively, you can define a custom storage configuration as required by your install pattern.
- {{site.data.alerts.end}}
-
-2. From your local workstation, start the Kubernetes cluster:
-
- {{site.data.alerts.callout_success}}
- To ensure that all 3 nodes can be placed into a different availability zone, you may want to first [confirm that at least 3 zones are available in the region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#availability-zones-describe) for your account.
- {{site.data.alerts.end}}
-
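- One way to confirm, before starting the cluster, assuming the AWS CLI is configured for your account (the region shown is illustrative):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ aws ec2 describe-availability-zones --region us-east-1
- ~~~
-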
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ eksctl create cluster \
- --name cockroachdb \
- --nodegroup-name standard-workers \
- --node-type m5.xlarge \
- --nodes 3 \
- --nodes-min 1 \
- --nodes-max 4 \
- --node-ami auto
- ~~~
-
- This creates EKS instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
-
- Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster.
-
-3. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/home) to verify that the stacks `eksctl-cockroachdb-cluster` and `eksctl-cockroachdb-nodegroup-standard-workers` were successfully created. Be sure that your region is selected in the console.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/orchestration/test-cluster-insecure.md b/src/current/_includes/v22.1/orchestration/test-cluster-insecure.md
deleted file mode 100644
index 285097f8e69..00000000000
--- a/src/current/_includes/v22.1/orchestration/test-cluster-insecure.md
+++ /dev/null
@@ -1,76 +0,0 @@
-1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
- If you deployed the cluster using the manual StatefulSet configuration:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
-
-
- If you deployed the cluster using the Helm chart, point the client at the Helm release's public service instead:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach:{{page.release_info.version}} \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=my-release-cockroachdb-public
- ~~~
-
-
-
-2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- balance DECIMAL
- );
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts (balance)
- VALUES
- (1000.50), (20000), (380), (500), (55000);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- id | balance
- +--------------------------------------+---------+
- 6f123370-c48c-41ff-b384-2c185590af2b | 380
- 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50
- ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500
- d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000
- e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000
- (5 rows)
- ~~~
-
-3. Exit the SQL shell and delete the temporary pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v22.1/orchestration/test-cluster-secure.md b/src/current/_includes/v22.1/orchestration/test-cluster-secure.md
deleted file mode 100644
index 8e72dd5b893..00000000000
--- a/src/current/_includes/v22.1/orchestration/test-cluster-secure.md
+++ /dev/null
@@ -1,144 +0,0 @@
-To use the CockroachDB SQL client, first launch a secure pod running the `cockroach` binary.
-
-
-If you deployed the cluster using the Operator:
-{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %}
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl create \
--f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/client-secure-operator.yaml
-~~~
-
-1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the CockroachDB SQL shell.
- # All statements must be terminated by a semicolon.
- # To exit, type: \q.
- #
- # Server version: CockroachDB CCL v21.1.0 (x86_64-unknown-linux-gnu, built 2021/04/23 13:54:57, go1.13.14) (same version as client)
- # Cluster ID: a96791d9-998c-4683-a3d3-edbf425bbf11
- #
- # Enter \? for a brief introduction.
- #
- root@cockroachdb-public:26257/defaultdb>
- ~~~
-
-{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %}
-
-
-
-If you deployed the cluster using our manual StatefulSet configuration (bring-your-own-certs):
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl create \
--f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/client.yaml
-~~~
-
-~~~
-pod/cockroachdb-client-secure created
-~~~
-
-1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the cockroach SQL interface.
- # All statements must be terminated by a semicolon.
- # To exit: CTRL + D.
- #
- # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
- # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
-
- # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4
- #
- # Enter \? for a brief introduction.
- #
- root@cockroachdb-public:26257/defaultdb>
- ~~~
-
- {{site.data.alerts.callout_success}}
- This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), rerun the `kubectl exec` command above with the appropriate `cockroach` command.
-
- If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`.
- {{site.data.alerts.end}}
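-
- For example, to check node status later from the same pod, rerun the command above with `cockroach node status`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node status \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~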
-
-{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %}
-
-
-
-If you deployed the cluster using the Helm chart, use our [`client-secure.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/examples/client-secure.yaml) file from your local workstation to launch a pod and keep it running indefinitely.
-
-1. Download the file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O \
- https://raw.githubusercontent.com/cockroachdb/helm-charts/master/examples/client-secure.yaml
- ~~~
-
-1. In the file, set the following values:
- - `spec.serviceAccountName: my-release-cockroachdb`
- - `spec.image: cockroachdb/cockroach:{your CockroachDB version}`
- - `spec.volumes[0].projected.sources[0].secret.name: my-release-cockroachdb-client-secret`
-
-1. Use the file to launch a pod and keep it running indefinitely:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f client-secure.yaml
- ~~~
-
- ~~~
- pod "cockroachdb-client-secure" created
- ~~~
-
-1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=./cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the cockroach SQL interface.
- # All statements must be terminated by a semicolon.
- # To exit: CTRL + D.
- #
- # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
- # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
-
- # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4
- #
- # Enter \? for a brief introduction.
- #
- root@my-release-cockroachdb-public:26257/defaultdb>
- ~~~
-
- {{site.data.alerts.callout_success}}
- This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), rerun the `kubectl exec` command above with the appropriate `cockroach` command.
-
- If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`.
- {{site.data.alerts.end}}
-
-{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %}
-
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/performance/alter-primary-key-hash-sharded.md b/src/current/_includes/v22.1/performance/alter-primary-key-hash-sharded.md
deleted file mode 100644
index 7aac175286e..00000000000
--- a/src/current/_includes/v22.1/performance/alter-primary-key-hash-sharded.md
+++ /dev/null
@@ -1,66 +0,0 @@
-Let's assume the `events` table already exists:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE events (
- product_id INT8,
- owner UUID,
- serial_number VARCHAR,
- event_id UUID,
- ts TIMESTAMP,
- data JSONB,
- PRIMARY KEY (product_id, owner, serial_number, ts, event_id),
- INDEX (ts) USING HASH
-);
-~~~
-
-You can change an existing primary key to use hash sharding by adding the `USING HASH` clause at the end of the key definition:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE events ALTER PRIMARY KEY USING COLUMNS (product_id, owner, serial_number, ts, event_id) USING HASH;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW INDEX FROM events;
-~~~
-
-~~~
- table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit
--------------+---------------+------------+--------------+-------------------------------------------------------------------+-----------+---------+-----------
- events | events_pkey | false | 1 | crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | ASC | false | true
- events | events_pkey | false | 2 | product_id | ASC | false | false
- events | events_pkey | false | 3 | owner | ASC | false | false
- events | events_pkey | false | 4 | serial_number | ASC | false | false
- events | events_pkey | false | 5 | ts | ASC | false | false
- events | events_pkey | false | 6 | event_id | ASC | false | false
- events | events_pkey | false | 7 | data | N/A | true | false
- events | events_ts_idx | true | 1 | crdb_internal_ts_shard_16 | ASC | false | true
- events | events_ts_idx | true | 2 | ts | ASC | false | false
- events | events_ts_idx | true | 3 | crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | ASC | false | true
- events | events_ts_idx | true | 4 | product_id | ASC | false | true
- events | events_ts_idx | true | 5 | owner | ASC | false | true
- events | events_ts_idx | true | 6 | serial_number | ASC | false | true
- events | events_ts_idx | true | 7 | event_id | ASC | false | true
-(14 rows)
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM events;
-~~~
-
-~~~
- column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden
---------------------------------------------------------------------+-----------+-------------+----------------+-----------------------------------------------------------------------------------------------+-----------------------------+------------
- product_id | INT8 | false | NULL | | {events_pkey,events_ts_idx} | false
- owner | UUID | false | NULL | | {events_pkey,events_ts_idx} | false
- serial_number | VARCHAR | false | NULL | | {events_pkey,events_ts_idx} | false
- event_id | UUID | false | NULL | | {events_pkey,events_ts_idx} | false
- ts | TIMESTAMP | false | NULL | | {events_pkey,events_ts_idx} | false
- data | JSONB | true | NULL | | {events_pkey} | false
- crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {events_ts_idx} | true
- crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(event_id, owner, product_id, serial_number, ts)), 16) | {events_pkey,events_ts_idx} | true
-(8 rows)
-~~~
diff --git a/src/current/_includes/v22.1/performance/check-rebalancing-after-partitioning.md b/src/current/_includes/v22.1/performance/check-rebalancing-after-partitioning.md
deleted file mode 100644
index b26d29b8631..00000000000
--- a/src/current/_includes/v22.1/performance/check-rebalancing-after-partitioning.md
+++ /dev/null
@@ -1,41 +0,0 @@
-Over the next few minutes, CockroachDB will rebalance all partitions based on the constraints you defined.
-
-To check this at a high level, access the DB Console at `{external address of any node}:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning:
-
-
-
-To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host={address of any node} \
---database=movr \
---execute="SELECT * FROM \
-[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \
-WHERE \"start_key\" IS NOT NULL \
- AND \"start_key\" NOT LIKE '%Prefix%';"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+------------------+----------------------------+----------+----------+--------------+
- /"boston" | /"boston"/PrefixEnd | 105 | {1,2,3} | 3
- /"los angeles" | /"los angeles"/PrefixEnd | 121 | {7,8,9} | 8
- /"new york" | /"new york"/PrefixEnd | 101 | {1,2,3} | 3
- /"san francisco" | /"san francisco"/PrefixEnd | 117 | {7,8,9} | 8
- /"seattle" | /"seattle"/PrefixEnd | 113 | {4,5,6} | 5
- /"washington dc" | /"washington dc"/PrefixEnd | 109 | {1,2,3} | 1
-(6 rows)
-~~~
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`.
diff --git a/src/current/_includes/v22.1/performance/check-rebalancing.md b/src/current/_includes/v22.1/performance/check-rebalancing.md
deleted file mode 100644
index 3109150fdaf..00000000000
--- a/src/current/_includes/v22.1/performance/check-rebalancing.md
+++ /dev/null
@@ -1,33 +0,0 @@
-Since you started each node with the `--locality` flag set to its GCE zone, over the next few minutes, CockroachDB will rebalance data evenly across the zones.
-
-To check this, access the DB Console at `{external address of any node}:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes:
-
-
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-To verify even balancing at range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host={address of any node} \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
- NULL | NULL | 33 | {3,4,7} | 7
-(1 row)
-~~~
-
-In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west2-a` zone.
diff --git a/src/current/_includes/v22.1/performance/configure-network.md b/src/current/_includes/v22.1/performance/configure-network.md
deleted file mode 100644
index e9abeb94df3..00000000000
--- a/src/current/_includes/v22.1/performance/configure-network.md
+++ /dev/null
@@ -1,18 +0,0 @@
-CockroachDB requires TCP communication on two ports:
-
-- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster)
-- **8080** (`tcp:8080`) for accessing the DB Console
-
-Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, to access the DB Console from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls):
-
-Field | Recommended Value
-------|------------------
-Name | **cockroachweb**
-Source filter | IP ranges
-Source IP ranges | Your local network's IP ranges
-Allowed protocols | **tcp:8080**
-Target tags | `cockroachdb`
-
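-If you prefer the command line, you could create a roughly equivalent rule with `gcloud` (the source range is a placeholder for your local network's IP ranges):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ gcloud compute firewall-rules create cockroachweb \
---allow tcp:8080 \
---source-ranges {your-ip-range} \
---target-tags cockroachdb
-~~~
-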
-{{site.data.alerts.callout_info}}
-The **tag** feature will let you easily apply the rule to your instances.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/performance/contention-indicators.md b/src/current/_includes/v22.1/performance/contention-indicators.md
deleted file mode 100644
index 41508c2310d..00000000000
--- a/src/current/_includes/v22.1/performance/contention-indicators.md
+++ /dev/null
@@ -1,4 +0,0 @@
-* Your application is experiencing degraded performance with transaction errors like `SQLSTATE: 40001`, `RETRY_WRITE_TOO_OLD`, and `RETRY_SERIALIZABLE`. See [Transaction Retry Error Reference](transaction-retry-error-reference.html).
-* The [SQL Statement Contention graph](ui-sql-dashboard.html#sql-statement-contention) is showing spikes over time.
-* The [Transaction Restarts graph](ui-sql-dashboard.html) is showing spikes in retries over time.
diff --git a/src/current/_includes/v22.1/performance/create-index-hash-sharded-secondary-index.md b/src/current/_includes/v22.1/performance/create-index-hash-sharded-secondary-index.md
deleted file mode 100644
index 05f66896541..00000000000
--- a/src/current/_includes/v22.1/performance/create-index-hash-sharded-secondary-index.md
+++ /dev/null
@@ -1,62 +0,0 @@
-Let's assume the `events` table already exists:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE events (
- product_id INT8,
- owner UUID,
- serial_number VARCHAR,
- event_id UUID,
- ts TIMESTAMP,
- data JSONB,
- PRIMARY KEY (product_id, owner, serial_number, ts, event_id)
-);
-~~~
-
-You can create a hash-sharded index on an existing table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE INDEX ON events(ts) USING HASH;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW INDEX FROM events;
-~~~
-
-~~~
- table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit
--------------+---------------+------------+--------------+---------------------------+-----------+---------+-----------
- events | events_pkey | false | 1 | product_id | ASC | false | false
- events | events_pkey | false | 2 | owner | ASC | false | false
- events | events_pkey | false | 3 | serial_number | ASC | false | false
- events | events_pkey | false | 4 | ts | ASC | false | false
- events | events_pkey | false | 5 | event_id | ASC | false | false
- events | events_pkey | false | 6 | data | N/A | true | false
- events | events_ts_idx | true | 1 | crdb_internal_ts_shard_16 | ASC | false | true
- events | events_ts_idx | true | 2 | ts | ASC | false | false
- events | events_ts_idx | true | 3 | product_id | ASC | false | true
- events | events_ts_idx | true | 4 | owner | ASC | false | true
- events | events_ts_idx | true | 5 | serial_number | ASC | false | true
- events | events_ts_idx | true | 6 | event_id | ASC | false | true
-(12 rows)
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM events;
-~~~
-
-~~~
- column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden
-----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------------------+------------
- product_id | INT8 | false | NULL | | {events_pkey,events_ts_idx} | false
- owner | UUID | false | NULL | | {events_pkey,events_ts_idx} | false
- serial_number | VARCHAR | false | NULL | | {events_pkey,events_ts_idx} | false
- event_id | UUID | false | NULL | | {events_pkey,events_ts_idx} | false
- ts | TIMESTAMP | false | NULL | | {events_pkey,events_ts_idx} | false
- data | JSONB | true | NULL | | {events_pkey} | false
- crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {events_ts_idx} | true
-(7 rows)
-~~~
diff --git a/src/current/_includes/v22.1/performance/create-table-hash-sharded-primary-index.md b/src/current/_includes/v22.1/performance/create-table-hash-sharded-primary-index.md
deleted file mode 100644
index 40ba79a096a..00000000000
--- a/src/current/_includes/v22.1/performance/create-table-hash-sharded-primary-index.md
+++ /dev/null
@@ -1,37 +0,0 @@
-Let's create the `products` table and add a hash-sharded primary key on the `ts` column:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE products (
- ts DECIMAL PRIMARY KEY USING HASH,
- product_id INT8
- );
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW INDEX FROM products;
-~~~
-
-~~~
- table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit
--------------+---------------+------------+--------------+---------------------------+-----------+---------+-----------
- products | products_pkey | false | 1 | crdb_internal_ts_shard_16 | ASC | false | true
- products | products_pkey | false | 2 | ts | ASC | false | false
- products | products_pkey | false | 3 | product_id | N/A | true | false
-(3 rows)
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM products;
-~~~
-
-~~~
- column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden
-----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------+------------
- crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {products_pkey} | true
- ts | DECIMAL | false | NULL | | {products_pkey} | false
- product_id | INT8 | true | NULL | | {products_pkey} | false
-(3 rows)
-~~~
diff --git a/src/current/_includes/v22.1/performance/create-table-hash-sharded-secondary-index.md b/src/current/_includes/v22.1/performance/create-table-hash-sharded-secondary-index.md
deleted file mode 100644
index dc0e164a0fb..00000000000
--- a/src/current/_includes/v22.1/performance/create-table-hash-sharded-secondary-index.md
+++ /dev/null
@@ -1,56 +0,0 @@
-Let's now create the `events` table and add a secondary index on the `ts` column in a single statement:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE events (
- product_id INT8,
- owner UUID,
- serial_number VARCHAR,
- event_id UUID,
- ts TIMESTAMP,
- data JSONB,
- PRIMARY KEY (product_id, owner, serial_number, ts, event_id),
- INDEX (ts) USING HASH
-);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW INDEX FROM events;
-~~~
-
-~~~
- table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit
--------------+---------------+------------+--------------+---------------------------+-----------+---------+-----------
- events | events_pkey | false | 1 | product_id | ASC | false | false
- events | events_pkey | false | 2 | owner | ASC | false | false
- events | events_pkey | false | 3 | serial_number | ASC | false | false
- events | events_pkey | false | 4 | ts | ASC | false | false
- events | events_pkey | false | 5 | event_id | ASC | false | false
- events | events_pkey | false | 6 | data | N/A | true | false
- events | events_ts_idx | true | 1 | crdb_internal_ts_shard_16 | ASC | false | true
- events | events_ts_idx | true | 2 | ts | ASC | false | false
- events | events_ts_idx | true | 3 | product_id | ASC | false | true
- events | events_ts_idx | true | 4 | owner | ASC | false | true
- events | events_ts_idx | true | 5 | serial_number | ASC | false | true
- events | events_ts_idx | true | 6 | event_id | ASC | false | true
-(12 rows)
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM events;
-~~~
-
-~~~
- column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden
-----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------------------+------------
- product_id | INT8 | false | NULL | | {events_pkey,events_ts_idx} | false
- owner | UUID | false | NULL | | {events_pkey,events_ts_idx} | false
- serial_number | VARCHAR | false | NULL | | {events_pkey,events_ts_idx} | false
- event_id | UUID | false | NULL | | {events_pkey,events_ts_idx} | false
- ts | TIMESTAMP | false | NULL | | {events_pkey,events_ts_idx} | false
- data | JSONB | true | NULL | | {events_pkey} | false
- crdb_internal_ts_shard_16 | INT8 | false | NULL | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {events_ts_idx} | true
-(7 rows)
-~~~
diff --git a/src/current/_includes/v22.1/performance/import-movr.md b/src/current/_includes/v22.1/performance/import-movr.md
deleted file mode 100644
index c61a32f64ce..00000000000
--- a/src/current/_includes/v22.1/performance/import-movr.md
+++ /dev/null
@@ -1,160 +0,0 @@
-Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle).
-
-1. Still on the fourth instance, start the [built-in SQL shell](cockroach-sql.html), pointing it at one of the CockroachDB nodes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql {{page.certs}} --host={address of any node}
- ~~~
-
-2. Create the `movr` database and set it as the default:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE movr;
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET DATABASE = movr;
- ~~~
-
-3. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles`, and `rides` tables:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE users (
- id UUID NOT NULL,
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+------+---------------+----------------+--------+
- 390345990764396545 | succeeded | 1 | 1998 | 0 | 0 | 241052
- (1 row)
-
- Time: 2.882582355s
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE vehicles (
- id UUID NOT NULL,
- city STRING NOT NULL,
- type STRING NULL,
- owner_id UUID NULL,
- creation_time TIMESTAMP NULL,
- status STRING NULL,
- ext JSON NULL,
- mycol STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+-------+---------------+----------------+---------+
- 390346109887250433 | succeeded | 1 | 19998 | 19998 | 0 | 3558767
- (1 row)
-
- Time: 5.803841493s
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE rides (
- id UUID NOT NULL,
- city STRING NOT NULL,
- vehicle_city STRING NULL,
- rider_id UUID NULL,
- vehicle_id UUID NULL,
- start_address STRING NULL,
- end_address STRING NULL,
- start_time TIMESTAMP NULL,
- end_time TIMESTAMP NULL,
- revenue DECIMAL(10,2) NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
- INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
- CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+--------+---------------+----------------+-----------+
- 390346325693792257 | succeeded | 1 | 999996 | 1999992 | 0 | 339741841
- (1 row)
-
- Time: 44.620371424s
- ~~~
-
- {{site.data.alerts.callout_success}}
- You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](ui-jobs-page.html) of the DB Console.
- {{site.data.alerts.end}}
-
-4. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables:
-
- Referencing columns | Referenced columns
- --------------------|-------------------
- `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id`
- `rides.city`, `rides.rider_id` | `users.city`, `users.id`
- `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id`
-
- As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE vehicles
- ADD CONSTRAINT fk_city_ref_users
- FOREIGN KEY (city, owner_id)
- REFERENCES users (city, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE rides
- ADD CONSTRAINT fk_city_ref_users
- FOREIGN KEY (city, rider_id)
- REFERENCES users (city, id);
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE rides
- ADD CONSTRAINT fk_vehicle_city_ref_vehicles
- FOREIGN KEY (vehicle_city, vehicle_id)
- REFERENCES vehicles (city, id);
- ~~~
-
-5. Exit the built-in SQL shell:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v22.1/performance/lease-preference-system-database.md b/src/current/_includes/v22.1/performance/lease-preference-system-database.md
deleted file mode 100644
index 23c4376fbf0..00000000000
--- a/src/current/_includes/v22.1/performance/lease-preference-system-database.md
+++ /dev/null
@@ -1,10 +0,0 @@
-To reduce latency while making {% if page.name == "online-schema-changes.md" %}online schema changes{% else %}[online schema changes](online-schema-changes.html){% endif %}, we recommend specifying a `lease_preference` [zone configuration](configure-replication-zones.html) on the `system` database to a single region and running all subsequent schema changes from a node within that region. For example, if the majority of online schema changes come from machines that are geographically close to `us-east1`, run the following:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER DATABASE system CONFIGURE ZONE USING constraints = '{"+region=us-east1": 1}', lease_preferences = '[[+region=us-east1]]';
-~~~
-
-Run all subsequent schema changes from a node in the specified region.
-
-If you do not intend to run more schema changes from that region, you can safely remove the lease preference from the zone configuration for the system database.
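-
-For example, a sketch of clearing it (this resets both the constraint and the lease preference set above to empty lists):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER DATABASE system CONFIGURE ZONE USING constraints = '[]', lease_preferences = '[]';
-~~~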
diff --git a/src/current/_includes/v22.1/performance/overview.md b/src/current/_includes/v22.1/performance/overview.md
deleted file mode 100644
index 195f8ee330f..00000000000
--- a/src/current/_includes/v22.1/performance/overview.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### Topology
-
-You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload:
-
-
-
-{{site.data.alerts.callout_info}}
-Within a single GCE zone, network latency between instances should be sub-millisecond.
-{{site.data.alerts.end}}
-
-You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload:
-
-
-
-{{site.data.alerts.callout_info}}
-Network latencies will increase with geographic distance between nodes. You can observe this in the [Network Latency page](ui-network-latency-page.html) of the DB Console.
-{{site.data.alerts.end}}
-
-To reproduce the performance demonstrated in this tutorial:
-
-- For each CockroachDB node, you'll use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk.
-- For running the client application workload, you'll use smaller instances, such as `n2-standard-2`.
-
-### Schema
-
-Your schema and data will be based on our open-source, fictional peer-to-peer vehicle-sharing application, [MovR](movr.html).
-
-
-
-A few notes about the schema:
-
-- There are just three self-explanatory tables: `users` represents the people registered for the service, `vehicles` represents the pool of vehicles available for rides, and `rides` represents when and where users have taken rides.
-- Each table has a composite primary key, with `city` first in the key. Although not necessary for the initial single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling (see the example after this list).
-- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later.
-
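-For example, the `users` table is keyed by `(city, id)`; written as a standalone statement, its schema (taken from the import you'll run) looks like this:
-
-~~~ sql
-CREATE TABLE users (
-    id UUID NOT NULL,
-    city STRING NOT NULL,
-    name STRING NULL,
-    address STRING NULL,
-    credit_card STRING NULL,
-    CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
-);
-~~~
-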
-### Important concepts
-
-To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first understand [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). Review that document before getting started here.
diff --git a/src/current/_includes/v22.1/performance/partition-by-city.md b/src/current/_includes/v22.1/performance/partition-by-city.md
deleted file mode 100644
index 2634a204d33..00000000000
--- a/src/current/_includes/v22.1/performance/partition-by-city.md
+++ /dev/null
@@ -1,419 +0,0 @@
-For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region.
-
-1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/).
-
-2. Once you've received the trial license, SSH to any node in your cluster and [apply the license](licensing-faqs.html#set-a-license):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --host= \
- --execute="SET CLUSTER SETTING cluster.organization = '';"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --host= \
- --execute="SET CLUSTER SETTING enterprise.license = '';"
- ~~~
-
-3. Define partitions for all tables and their secondary indexes.
-
- Start with the `users` table:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- Now define partitions for the `vehicles` table and its secondary indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE vehicles \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- Next, define partitions for the `rides` table and its secondary indexes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE rides \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \
- PARTITION BY LIST (vehicle_city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- Finally, drop an unused index on `rides` rather than partition it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="DROP INDEX rides_start_time_idx;"
- ~~~
-
- {{site.data.alerts.callout_info}}
- The `rides` table contains 1 million rows, so dropping this index will take a few minutes.
- {{site.data.alerts.end}}
-
-4. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-partition) to require city data to be stored on specific nodes based on node locality.
-
- City | Locality
- -----|---------
- New York | `zone=us-east1-b`
- Boston | `zone=us-east1-b`
- Washington DC | `zone=us-east1-b`
- Seattle | `zone=us-west1-a`
- San Francisco | `zone=us-west2-a`
- Los Angeles | `zone=us-west2-a`
-
- {{site.data.alerts.callout_info}}
- Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead.
- {{site.data.alerts.end}}
-
- Start with the `users` table partitions:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- Move on to the `vehicles` table and secondary index partitions:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- Finish with the `rides` table and secondary index partitions:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
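-
- To confirm that the partitions and their zone configurations were applied, you can inspect any of the partitioned tables (a quick check; for example):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="SELECT partition_name, zone_config FROM [SHOW PARTITIONS FROM TABLE movr.users];" \
- {{page.certs}} \
- --host=
- ~~~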
diff --git a/src/current/_includes/v22.1/performance/scale-cluster.md b/src/current/_includes/v22.1/performance/scale-cluster.md
deleted file mode 100644
index 6c368d663de..00000000000
--- a/src/current/_includes/v22.1/performance/scale-cluster.md
+++ /dev/null
@@ -1,61 +0,0 @@
-1. SSH to one of the `n2-standard-4` instances in the `us-west1-a` zone.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-3. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-host= \
- --join= \
- --locality=cloud=gce,region=us-west1,zone=us-west1-a \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-4. Repeat steps 1 - 3 for the other two `n2-standard-4` instances in the `us-west1-a` zone.
-
-5. SSH to one of the `n2-standard-4` instances in the `us-west2-a` zone.
-
-6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-7. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-host= \
- --join= \
- --locality=cloud=gce,region=us-west2,zone=us-west2-a \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-8. Repeat steps 5 - 7 for the other two `n2-standard-4` instances in the `us-west2-a` zone.
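-
-9. To verify that all 9 nodes have joined the cluster, run the [`cockroach node status`](cockroach-node.html) command from any node (a quick check; adjust the flags to match how you connect):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach node status \
- {{page.certs}} \
- --host=
- ~~~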
diff --git a/src/current/_includes/v22.1/performance/start-cluster.md b/src/current/_includes/v22.1/performance/start-cluster.md
deleted file mode 100644
index ee1d71149a7..00000000000
--- a/src/current/_includes/v22.1/performance/start-cluster.md
+++ /dev/null
@@ -1,60 +0,0 @@
-#### Start the nodes
-
-1. SSH to the first `n2-standard-4` instance.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-3. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-host= \
- --join=:26257,:26257,:26257 \
- --locality=cloud=gce,region=us-east1,zone=us-east1-b \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-4. Repeat steps 1 - 3 for the other two `n2-standard-4` instances. Be sure to adjust the `--advertise-host` flag each time.
-
-#### Initialize the cluster
-
-1. SSH to the fourth instance, the one not running a CockroachDB node.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-4. Run the [`cockroach init`](cockroach-init.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach init {{page.certs}} --host=
- ~~~
-
- Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients.
diff --git a/src/current/_includes/v22.1/performance/statement-contention.md b/src/current/_includes/v22.1/performance/statement-contention.md
deleted file mode 100644
index 059d05ea2c4..00000000000
--- a/src/current/_includes/v22.1/performance/statement-contention.md
+++ /dev/null
@@ -1,14 +0,0 @@
-Find the transactions and statements within the transactions that are experiencing [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). CockroachDB has several tools to help you track down such transactions and statements:
-
-* In DB Console, visit the [Transactions](ui-transactions-page.html) and [Statements](ui-statements-page.html) pages and sort transactions and statements by contention.
-* Query the following tables:
-
- [`crdb_internal.cluster_contended_indexes`](crdb-internal.html#cluster_contended_indexes) and [`crdb_internal.cluster_contended_tables`](crdb-internal.html#cluster_contended_tables) tables for your database to find the indexes and tables that are experiencing contention (see the example query after this list).
- - [`crdb_internal.cluster_locks`](crdb-internal.html#cluster_locks) to find out which transactions are holding locks on which objects.
- - [`crdb_internal.cluster_contention_events`](crdb-internal.html#view-the-tables-indexes-with-the-most-time-under-contention) to view the tables/indexes with the most time under contention.
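-
-For example, a minimal query against one of these tables looks like the following (selecting everything here for illustration; in practice you might filter or order by the contention count):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SELECT * FROM crdb_internal.cluster_contended_tables;
-~~~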
-
-After you identify the transactions or statements that are causing contention, follow the steps in the next section [to avoid contention](performance-best-practices-overview.html#avoid-transaction-contention).
-
-{{site.data.alerts.callout_info}}
-If you experience a hanging or stuck query that is not showing up in the list of contended transactions and statements on the [Transactions](ui-transactions-page.html) or [Statements](ui-statements-page.html) pages in the DB Console, the process described above will not work. You will need to follow the process described in [Hanging or stuck queries](query-behavior-troubleshooting.html#hanging-or-stuck-queries) instead.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/performance/test-performance-after-partitioning.md b/src/current/_includes/v22.1/performance/test-performance-after-partitioning.md
deleted file mode 100644
index 9754f6d9cd1..00000000000
--- a/src/current/_includes/v22.1/performance/test-performance-after-partitioning.md
+++ /dev/null
@@ -1,93 +0,0 @@
-After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city.
-
-To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance).
-
-#### Reads
-
-Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
-
-1. SSH to the instance in `us-east1-b` with the Python client.
-
-2. Query for the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'new york' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
- ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
- ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
- ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
- ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
- ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
- ...
-
- Times (milliseconds):
- [20.065784454345703, 7.866144180297852, 8.362054824829102, 9.08803939819336, 7.925987243652344, 7.543087005615234, 7.786035537719727, 8.227825164794922, 7.907867431640625, 7.654905319213867, 7.793903350830078, 7.627964019775391, 7.833957672119141, 7.858037948608398, 7.474184036254883, 9.459972381591797, 7.726192474365234, 7.194995880126953, 7.364034652709961, 7.25102424621582, 7.650852203369141, 7.663965225219727, 9.334087371826172, 7.810115814208984, 7.543087005615234, 7.134914398193359, 7.922887802124023, 7.220029830932617, 7.606029510498047, 7.208108901977539, 7.333993911743164, 7.464170455932617, 7.679939270019531, 7.436990737915039, 7.62486457824707, 7.235050201416016, 7.420063018798828, 7.795095443725586, 7.39598274230957, 7.546901702880859, 7.582187652587891, 7.9669952392578125, 7.418155670166016, 7.539033889770508, 7.805109024047852, 7.086992263793945, 7.069826126098633, 7.833957672119141, 7.43412971496582, 7.035017013549805]
-
- Median time (milliseconds):
- 7.62641429901
- ~~~
-
-Before partitioning, this query took a median time of 72.02ms. After partitioning, the query took a median time of only 7.62ms.
-
-#### Writes
-
-Now let's again imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
-
-1. SSH to the instance in `us-west1-a` with the Python client.
-
-2. Create 100 Seattle-based users:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [41.8248176574707, 9.701967239379883, 8.725166320800781, 9.058952331542969, 7.819175720214844, 6.247997283935547, 10.265827178955078, 7.627964019775391, 9.120941162109375, 7.977008819580078, 9.247064590454102, 8.929967880249023, 9.610176086425781, 14.40286636352539, 8.588075637817383, 8.67319107055664, 9.417057037353516, 7.652044296264648, 8.917093276977539, 9.135961532592773, 8.604049682617188, 9.220123291015625, 7.578134536743164, 9.096860885620117, 8.942842483520508, 8.63790512084961, 7.722139358520508, 13.59701156616211, 9.176015853881836, 11.484146118164062, 9.212017059326172, 7.563114166259766, 8.793115615844727, 8.80289077758789, 7.827043533325195, 7.6389312744140625, 17.47584342956543, 9.436845779418945, 7.63392448425293, 8.594989776611328, 9.002208709716797, 8.93402099609375, 8.71896743774414, 8.76307487487793, 8.156061172485352, 8.729934692382812, 8.738040924072266, 8.25190544128418, 8.971929550170898, 7.460832595825195, 8.889198303222656, 8.45789909362793, 8.761167526245117, 10.223865509033203, 8.892059326171875, 8.961915969848633, 8.968114852905273, 7.750988006591797, 7.761955261230469, 9.199142456054688, 9.02700424194336, 9.509086608886719, 9.428977966308594, 7.902860641479492, 8.940935134887695, 8.615970611572266, 8.75401496887207, 7.906913757324219, 8.179187774658203, 11.447906494140625, 8.71419906616211, 9.202003479003906, 9.263038635253906, 9.089946746826172, 8.92496109008789, 10.32114028930664, 7.913827896118164, 9.464025497436523, 10.612010955810547, 8.78596305847168, 8.878946304321289, 7.575035095214844, 10.657072067260742, 8.777856826782227, 8.649110794067383, 9.012937545776367, 8.931875228881836, 9.31406021118164, 9.396076202392578, 8.908987045288086, 8.002996444702148, 9.089946746826172, 7.5588226318359375, 8.918046951293945, 12.117862701416016, 7.266998291015625, 8.074045181274414, 8.955001831054688, 8.868932723999023, 8.755922317504883]
-
- Median time (milliseconds):
- 8.90052318573
- ~~~
-
- Before partitioning, this query took a median time of 48.40ms. After partitioning, the query took a median time of only 8.90ms.
-
-3. SSH to the instance in `us-east1-b` with the Python client.
-
-4. Create 100 new NY-based users:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [276.3068675994873, 9.830951690673828, 8.772134780883789, 9.304046630859375, 8.24880599975586, 7.959842681884766, 7.848978042602539, 7.879018783569336, 7.754087448120117, 10.724067687988281, 13.960123062133789, 9.825944900512695, 9.60993766784668, 9.273052215576172, 9.41920280456543, 8.040904998779297, 16.484975814819336, 10.178089141845703, 8.322000503540039, 9.468793869018555, 8.002042770385742, 9.185075759887695, 9.54294204711914, 9.387016296386719, 9.676933288574219, 13.051986694335938, 9.506940841674805, 12.327909469604492, 10.377168655395508, 15.023946762084961, 9.985923767089844, 7.853031158447266, 9.43303108215332, 9.164094924926758, 10.941028594970703, 9.37199592590332, 12.359857559204102, 8.975028991699219, 7.728099822998047, 8.310079574584961, 9.792089462280273, 9.448051452636719, 8.057117462158203, 9.37795639038086, 9.753942489624023, 9.576082229614258, 8.192062377929688, 9.392023086547852, 7.97581672668457, 8.165121078491211, 9.660959243774414, 8.270978927612305, 9.901046752929688, 8.085966110229492, 10.581016540527344, 9.831905364990234, 7.883787155151367, 8.077859878540039, 8.161067962646484, 10.02812385559082, 7.9898834228515625, 9.840965270996094, 9.452104568481445, 9.747028350830078, 9.003162384033203, 9.206056594848633, 9.274005889892578, 7.8449249267578125, 8.827924728393555, 9.322881698608398, 12.08186149597168, 8.76307487487793, 8.353948593139648, 8.182048797607422, 7.736921310424805, 9.31406021118164, 9.263992309570312, 9.282112121582031, 7.823944091796875, 9.11712646484375, 8.099079132080078, 9.156942367553711, 8.363962173461914, 10.974884033203125, 8.729934692382812, 9.2620849609375, 9.27591323852539, 8.272886276245117, 8.25190544128418, 8.093118667602539, 9.259939193725586, 8.413076400756836, 8.198976516723633, 9.95182991027832, 8.024930953979492, 8.895158767700195, 8.243083953857422, 9.076833724975586, 9.994029998779297, 10.149955749511719]
-
- Median time (milliseconds):
- 9.26303863525
- ~~~
-
- Before partitioning, this query took a median time of 116.86ms. After partitioning, the query took a median time of only 9.26ms.
diff --git a/src/current/_includes/v22.1/performance/test-performance.md b/src/current/_includes/v22.1/performance/test-performance.md
deleted file mode 100644
index 018dbd902ab..00000000000
--- a/src/current/_includes/v22.1/performance/test-performance.md
+++ /dev/null
@@ -1,146 +0,0 @@
-In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases.
-
-#### Reads
-
-For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
-
-1. SSH to the instance in `us-east1-b` with the Python client.
-
-2. Query for the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'new york' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
- ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
- ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
- ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
- ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
- ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
- ...
-
- Times (milliseconds):
- [933.8209629058838, 72.02410697937012, 72.45206832885742, 72.39294052124023, 72.8158950805664, 72.07584381103516, 72.21412658691406, 71.96712493896484, 71.75517082214355, 72.16811180114746, 71.78592681884766, 72.91603088378906, 71.91109657287598, 71.4719295501709, 72.40676879882812, 71.8080997467041, 71.84004783630371, 71.98500633239746, 72.40891456604004, 73.75001907348633, 71.45905494689941, 71.53081893920898, 71.46596908569336, 72.07608222961426, 71.94995880126953, 71.41804695129395, 71.29096984863281, 72.11899757385254, 71.63381576538086, 71.3050365447998, 71.83194160461426, 71.20394706726074, 70.9981918334961, 72.79205322265625, 72.63493537902832, 72.15285301208496, 71.8698501586914, 72.30591773986816, 71.53582572937012, 72.69001007080078, 72.03006744384766, 72.56317138671875, 71.61688804626465, 72.17121124267578, 70.20092010498047, 72.12018966674805, 73.34589958190918, 73.01592826843262, 71.49410247802734, 72.19099998474121]
-
- Median time (milliseconds):
- 72.0270872116
- ~~~
-
-As we saw earlier, the leaseholder for the `vehicles` table is in `us-west2-a` (Los Angeles), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client.
-
-For contrast, imagine we are now a Movr administrator in Los Angeles, and we want to get the IDs and descriptions of all Los Angeles-based bikes that are currently in use:
-
-1. SSH to the instance in `us-west2-a` with the Python client.
-
-2. Query for the data:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'los angeles' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"]
- ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"]
- ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"]
- ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"]
- ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"]
-
- Times (milliseconds):
- [782.6759815216064, 8.564949035644531, 8.226156234741211, 7.949113845825195, 7.86590576171875, 7.842063903808594, 7.674932479858398, 7.555961608886719, 7.642984390258789, 8.024930953979492, 7.717132568359375, 8.46409797668457, 7.520914077758789, 7.6541900634765625, 7.458925247192383, 7.671833038330078, 7.740020751953125, 7.771015167236328, 7.598161697387695, 8.411169052124023, 7.408857345581055, 7.469892501831055, 7.524967193603516, 7.764101028442383, 7.750988006591797, 7.2460174560546875, 6.927967071533203, 7.822990417480469, 7.27391242980957, 7.730960845947266, 7.4710845947265625, 7.4310302734375, 7.33494758605957, 7.455110549926758, 7.021188735961914, 7.083892822265625, 7.812976837158203, 7.625102996826172, 7.447957992553711, 7.179021835327148, 7.504940032958984, 7.224082946777344, 7.257938385009766, 7.714986801147461, 7.4939727783203125, 7.6160430908203125, 7.578849792480469, 7.890939712524414, 7.546901702880859, 7.411956787109375]
-
- Median time (milliseconds):
- 7.6071023941
- ~~~
-
-Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 7.60ms compared to the similar query in New York that took 72.02ms.
-
-#### Writes
-
-The geographic distribution of data impacts write performance as well. For example, imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
-
-1. SSH to the instance in `us-west1-a` with the Python client.
-
-2. Create 100 Seattle-based users:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [277.4538993835449, 50.12702941894531, 47.75214195251465, 48.13408851623535, 47.872066497802734, 48.65407943725586, 47.78695106506348, 49.14689064025879, 52.770137786865234, 49.00097846984863, 48.68602752685547, 47.387123107910156, 47.36208915710449, 47.6841926574707, 46.49209976196289, 47.06096649169922, 46.753883361816406, 46.304941177368164, 48.90894889831543, 48.63715171813965, 48.37393760681152, 49.23295974731445, 50.13418197631836, 48.310041427612305, 48.57516288757324, 47.62911796569824, 47.77693748474121, 47.505855560302734, 47.89996147155762, 49.79205131530762, 50.76479911804199, 50.21500587463379, 48.73299598693848, 47.55592346191406, 47.35088348388672, 46.7071533203125, 43.00808906555176, 43.1060791015625, 46.02813720703125, 47.91092872619629, 68.71294975280762, 49.241065979003906, 48.9039421081543, 47.82295227050781, 48.26998710632324, 47.631025314331055, 64.51892852783203, 48.12812805175781, 67.33417510986328, 48.603057861328125, 50.31013488769531, 51.02396011352539, 51.45716667175293, 50.85396766662598, 49.07512664794922, 47.49894142150879, 44.67201232910156, 43.827056884765625, 44.412851333618164, 46.69189453125, 49.55601692199707, 49.16882514953613, 49.88598823547363, 49.31306838989258, 46.875, 46.69594764709473, 48.31886291503906, 48.378944396972656, 49.0570068359375, 49.417972564697266, 48.22111129760742, 50.662994384765625, 50.58097839355469, 75.44088363647461, 51.05400085449219, 50.85110664367676, 48.187971115112305, 56.7781925201416, 42.47403144836426, 46.2191104888916, 53.96890640258789, 46.697139739990234, 48.99096488952637, 49.1330623626709, 46.34690284729004, 47.09315299987793, 46.39410972595215, 46.51689529418945, 47.58000373840332, 47.924041748046875, 48.426151275634766, 50.22597312927246, 50.1859188079834, 50.37498474121094, 49.861907958984375, 51.477909088134766, 73.09293746948242, 48.779964447021484, 45.13692855834961, 42.2968864440918]
-
- Median time (milliseconds):
- 48.4025478363
- ~~~
-
-3. SSH to the instance in `us-east1-b` with the Python client.
-
-4. Create 100 new NY-based users:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [131.05082511901855, 116.88899993896484, 115.15498161315918, 117.095947265625, 121.04082107543945, 115.8750057220459, 113.80696296691895, 113.05880546569824, 118.41201782226562, 125.30899047851562, 117.5389289855957, 115.23890495300293, 116.84799194335938, 120.0411319732666, 115.62800407409668, 115.08989334106445, 113.37089538574219, 115.15498161315918, 115.96989631652832, 133.1961154937744, 114.25995826721191, 118.09396743774414, 122.24102020263672, 116.14608764648438, 114.80998992919922, 131.9139003753662, 114.54391479492188, 115.15307426452637, 116.7759895324707, 135.10799407958984, 117.18511581420898, 120.15485763549805, 118.0570125579834, 114.52388763427734, 115.28396606445312, 130.00011444091797, 126.45292282104492, 142.69423484802246, 117.60401725769043, 134.08493995666504, 117.47002601623535, 115.75007438659668, 117.98381805419922, 115.83089828491211, 114.88890647888184, 113.23404312133789, 121.1700439453125, 117.84791946411133, 115.35286903381348, 115.0820255279541, 116.99700355529785, 116.67394638061523, 116.1041259765625, 114.67289924621582, 112.98894882202148, 117.1119213104248, 119.78602409362793, 114.57300186157227, 129.58717346191406, 118.37983131408691, 126.68204307556152, 118.30306053161621, 113.27195167541504, 114.22920227050781, 115.80777168273926, 116.81294441223145, 114.76683616638184, 115.1430606842041, 117.29192733764648, 118.24417114257812, 116.56999588012695, 113.8620376586914, 114.88819122314453, 120.80597877502441, 132.39002227783203, 131.00910186767578, 114.56179618835449, 117.03896522521973, 117.72680282592773, 115.6010627746582, 115.27681350708008, 114.52317237854004, 114.87483978271484, 117.78903007507324, 116.65701866149902, 122.6949691772461, 117.65193939208984, 120.5449104309082, 115.61179161071777, 117.54202842712402, 114.70890045166016, 113.58809471130371, 129.7171115875244, 117.57993698120117, 117.1119213104248, 117.64001846313477, 140.66505432128906, 136.41691207885742, 116.24789237976074, 115.19908905029297]
-
- Median time (milliseconds):
- 116.868495941
- ~~~
-
-It took 48.40ms to create a user in Seattle and 116.86ms to create a user in New York. To better understand this discrepancy, let's look at the distribution of data for the `users` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
- NULL | NULL | 49 | {2,6,8} | 6
-(1 row)
-~~~
-
-For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-west1-a` zone. This means that:
-
-- When creating a user in Seattle, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client.
-- When creating a user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1-a`. It then has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client back in the east.
diff --git a/src/current/_includes/v22.1/performance/tuning-secure.py b/src/current/_includes/v22.1/performance/tuning-secure.py
deleted file mode 100644
index a644dbb1c87..00000000000
--- a/src/current/_includes/v22.1/performance/tuning-secure.py
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
- description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
- help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
- help="statement to execute")
-parser.add_argument("--repeat", type=int,
- help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
- help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
- help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
- database='movr',
- user='root',
- host=args.host,
- port=26257,
- sslmode='require',
- sslrootcert='certs/ca.crt',
- sslkey='certs/client.root.key',
- sslcert='certs/client.root.crt'
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-def median(lst):
- n = len(lst)
- if n < 1:
- return None
- if n % 2 == 1:
- return sorted(lst)[n//2]
- else:
- return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
-times = list()
-for n in range(args.repeat):
- start = time.time()
- statement = args.statement
- cur.execute(statement)
- if n < 1:
- if cur.description is not None:
- colnames = [desc[0] for desc in cur.description]
- print("")
- print("Result:")
- print(colnames)
- rows = cur.fetchall()
- for row in rows:
- print([str(cell) for cell in row])
- end = time.time()
- times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
- print("Times (milliseconds):")
- print(times)
- print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
- print("Cumulative time (milliseconds):")
- print(sum(times))
- print("")
diff --git a/src/current/_includes/v22.1/performance/tuning.py b/src/current/_includes/v22.1/performance/tuning.py
deleted file mode 100644
index dcb567dad91..00000000000
--- a/src/current/_includes/v22.1/performance/tuning.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
- description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
- help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
- help="statement to execute")
-parser.add_argument("--repeat", type=int,
- help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
- help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
- help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
- database='movr',
- user='root',
- host=args.host,
- port=26257
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-def median(lst):
- n = len(lst)
- if n < 1:
- return None
- if n % 2 == 1:
- return sorted(lst)[n//2]
- else:
- return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
-times = list()
-for n in range(args.repeat):
- start = time.time()
- statement = args.statement
- cur.execute(statement)
- if n < 1:
- if cur.description is not None:
- colnames = [desc[0] for desc in cur.description]
- print("")
- print("Result:")
- print(colnames)
- rows = cur.fetchall()
- for row in rows:
- print([str(cell) for cell in row])
- end = time.time()
- times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
- print("Times (milliseconds):")
- print(times)
- print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
- print("Cumulative time (milliseconds):")
- print(sum(times))
- print("")
diff --git a/src/current/_includes/v22.1/performance/use-hash-sharded-indexes.md b/src/current/_includes/v22.1/performance/use-hash-sharded-indexes.md
deleted file mode 100644
index 715b378c9bb..00000000000
--- a/src/current/_includes/v22.1/performance/use-hash-sharded-indexes.md
+++ /dev/null
@@ -1 +0,0 @@
-For performance reasons, we discourage [indexing on sequential keys](indexes.html). If, however, you are working with a table that must be indexed on sequential keys, you should use [hash-sharded indexes](hash-sharded-indexes.html). Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hot spots and improving write performance on sequentially-keyed indexes at a small cost to read performance.
\ No newline at end of file
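-
-For example, a hash-sharded secondary index on a sequential timestamp column can be created as follows (a minimal sketch using a hypothetical `events` table):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE events (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    ts TIMESTAMP NOT NULL
-);
-
-CREATE INDEX events_ts_idx ON events (ts) USING HASH;
-~~~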
diff --git a/src/current/_includes/v22.1/prod-deployment/advertise-addr-join.md b/src/current/_includes/v22.1/prod-deployment/advertise-addr-join.md
deleted file mode 100644
index 67019d1fcea..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/advertise-addr-join.md
+++ /dev/null
@@ -1,4 +0,0 @@
-Flag | Description
------|------------
-`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
-`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
diff --git a/src/current/_includes/v22.1/prod-deployment/aws-inbound-rules.md b/src/current/_includes/v22.1/prod-deployment/aws-inbound-rules.md
deleted file mode 100644
index 8be748205a6..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/aws-inbound-rules.md
+++ /dev/null
@@ -1,31 +0,0 @@
-#### Inter-node and load balancer-node communication
-
- Field | Value
--------|-------------------
- Port Range | **26257**
- Source | The ID of your security group (e.g., *sg-07ab277a*)
-
-#### Application data
-
- Field | Value
--------|-------------------
- Port Range | **26257**
- Source | Your application's IP ranges
-
-#### DB Console
-
- Field | Value
--------|-------------------
- Port Range | **8080**
- Source | Your network's IP ranges
-
-You can set your network IP by selecting "My IP" in the Source field.
-
-#### Load balancer-health check communication
-
- Field | Value
--------|-------------------
- Port Range | **8080**
- Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16)
-
- To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/backup.sh b/src/current/_includes/v22.1/prod-deployment/backup.sh
deleted file mode 100644
index efcbd4c7041..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/backup.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-
-set -euo pipefail
-
-# This script creates full backups when run on the configured
-# day of the week and incremental backups when run on other days, and tracks
-# recently created backups in a file to pass as the base for incremental backups.
-
-what="" # Leave empty for cluster backup, or add "DATABASE database_name" to backup a database.
-base="/backups" # The URL where you want to store the backup.
-extra="" # Any additional parameters that need to be appended to the BACKUP URI e.g., AWS key params.
-recent=recent_backups.txt # File in which recent backups are tracked.
-backup_parameters= # e.g., "WITH revision_history"
-
-# Customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, `--port`, and additional flags as needed to connect to the SQL client.
-runsql() { cockroach sql --insecure -e "$1"; }
-
-destination="${base}/$(date +"%Y-%V")${extra}" # %V is the week number of the year, with Monday as the first day of the week.
-
-runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' $backup_parameters"
-echo "backed up to ${destination}"
diff --git a/src/current/_includes/v22.1/prod-deployment/check-sql-query-performance.md b/src/current/_includes/v22.1/prod-deployment/check-sql-query-performance.md
deleted file mode 100644
index 1abfcc52778..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/check-sql-query-performance.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-If you aren't sure whether SQL query performance needs to be improved on your cluster, see [Identify slow statements](query-behavior-troubleshooting.html#identify-slow-queries).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/prod-deployment/cloud-report.md b/src/current/_includes/v22.1/prod-deployment/cloud-report.md
deleted file mode 100644
index aa2a765af6e..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/cloud-report.md
+++ /dev/null
@@ -1 +0,0 @@
-Cockroach Labs creates a yearly cloud report focused on evaluating hardware performance. For more information, see the [2022 Cloud Report](https://www.cockroachlabs.com/guides/2022-cloud-report/).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/cluster-unavailable-monitoring.md b/src/current/_includes/v22.1/prod-deployment/cluster-unavailable-monitoring.md
deleted file mode 100644
index d4d8803ca1f..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/cluster-unavailable-monitoring.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-If the cluster becomes unavailable, the DB Console and Cluster API will also become unavailable. You can continue to monitor the cluster via the [Prometheus endpoint](monitoring-and-alerting.html#prometheus-endpoint) and [logs](logging-overview.html).
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-command-commit-latency.md b/src/current/_includes/v22.1/prod-deployment/healthy-command-commit-latency.md
deleted file mode 100644
index d055f37aded..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-command-commit-latency.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: On SSDs ([strongly recommended](recommended-production-settings.html#storage)), this should be between 1 and 100 milliseconds. On HDDs, this should be no more than 1 second.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-cpu-percent.md b/src/current/_includes/v22.1/prod-deployment/healthy-cpu-percent.md
deleted file mode 100644
index a58b0b87973..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-cpu-percent.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: CPU utilized by CockroachDB should not persistently exceed 80%. Because this metric does not reflect system CPU usage, values above 80% suggest that actual CPU utilization is nearing 100%.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-crdb-memory.md b/src/current/_includes/v22.1/prod-deployment/healthy-crdb-memory.md
deleted file mode 100644
index a0994e08eed..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-crdb-memory.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: RSS minus Go Total and CGo Total should not exceed 100 MiB. Go Allocated should not exceed a few hundred MiB. CGo Allocated should not exceed the `--cache` size.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-disk-ops-in-progress.md b/src/current/_includes/v22.1/prod-deployment/healthy-disk-ops-in-progress.md
deleted file mode 100644
index e80714df120..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-disk-ops-in-progress.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: This value should be 0 or single-digit values for short periods of time. If the values persist in double digits, you may have an I/O bottleneck.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-lsm.md b/src/current/_includes/v22.1/prod-deployment/healthy-lsm.md
deleted file mode 100644
index 31fd320af2a..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-lsm.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: The number of L0 files should **not** be in the high thousands. High values indicate heavy write load that is causing accumulation of files in level 0. These files are not being compacted quickly enough to lower levels, resulting in a misshapen LSM.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-node-heartbeat-latency.md b/src/current/_includes/v22.1/prod-deployment/healthy-node-heartbeat-latency.md
deleted file mode 100644
index ed58182c98f..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-node-heartbeat-latency.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: Less than 100ms in addition to the [network latency](ui-network-latency-page.html) between nodes in the cluster.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-read-amplification.md b/src/current/_includes/v22.1/prod-deployment/healthy-read-amplification.md
deleted file mode 100644
index c7ffe9c6d17..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-read-amplification.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: Read amplification factor should be in the single digits. A value exceeding 50 for 1 hour strongly suggests that the LSM tree has an unhealthy shape.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-sql-memory.md b/src/current/_includes/v22.1/prod-deployment/healthy-sql-memory.md
deleted file mode 100644
index 0b963ed55b3..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-sql-memory.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: This value should not exceed the [`--max-sql-memory`](recommended-production-settings.html#cache-and-sql-memory-size) size. A healthy threshold is 75% of allocated `--max-sql-memory`.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-storage-capacity.md b/src/current/_includes/v22.1/prod-deployment/healthy-storage-capacity.md
deleted file mode 100644
index af6253c932d..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-storage-capacity.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: Used capacity should not persistently exceed 80% of the total capacity.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/healthy-workload-concurrency.md b/src/current/_includes/v22.1/prod-deployment/healthy-workload-concurrency.md
deleted file mode 100644
index 6e8d4891339..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/healthy-workload-concurrency.md
+++ /dev/null
@@ -1 +0,0 @@
-**Expected values for a healthy cluster**: At any time, the total number of actively executing SQL statements should not exceed 4 times the number of vCPUs in the cluster. For more details, see [Sizing connection pools](connection-pooling.html#sizing-connection-pools).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-flag.md b/src/current/_includes/v22.1/prod-deployment/insecure-flag.md
deleted file mode 100644
index a13951ba4bc..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-flag.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-The `--insecure` flag used in this tutorial is intended for non-production testing only. To run CockroachDB in production, use a secure cluster instead.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v22.1/prod-deployment/insecure-initialize-cluster.md
deleted file mode 100644
index 1bf99ee27c0..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-initialize-cluster.md
+++ /dev/null
@@ -1,12 +0,0 @@
-On your local machine, complete the node startup process and have them join together as a cluster:
-
-1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
-
-2. Run the [`cockroach init`](cockroach-init.html) command, with the `--host` flag set to the address of any node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=
- ~~~
-
- Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients.
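   For reference, with a hypothetical node reachable at `10.0.0.1` (substitute the address of any of your own nodes), the initialization command would look like this:

   ~~~ shell
   # 10.0.0.1 is an illustrative address only
   $ cockroach init --insecure --host=10.0.0.1:26257
   ~~~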
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-recommendations.md b/src/current/_includes/v22.1/prod-deployment/insecure-recommendations.md
deleted file mode 100644
index e27b3489865..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-recommendations.md
+++ /dev/null
@@ -1,13 +0,0 @@
-- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks:
- - Your cluster is open to any client that can access any node's IP addresses.
- - Any user, even `root`, can log in without providing a password.
- - Any user, connecting as `root`, can read or write any data in your cluster.
- - There is no network encryption or authentication, and thus no confidentiality.
-
-- Decide how you want to access your DB Console:
-
- Access Level | Description
- -------------|------------
- Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
- Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
- Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console.
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-requirements.md b/src/current/_includes/v22.1/prod-deployment/insecure-requirements.md
deleted file mode 100644
index fb2faee26e8..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-requirements.md
+++ /dev/null
@@ -1,9 +0,0 @@
-- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
-
-- Your network configuration must allow TCP communication on the following ports:
- - `26257` for intra-cluster and client-cluster communication
- - `8080` to expose your DB Console
-
-- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html).
-
-{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v22.1/prod-deployment/insecure-scale-cluster.md
deleted file mode 100644
index 335463e6db3..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-scale-cluster.md
+++ /dev/null
@@ -1,121 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-
-
-
-
-
-
-
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier).
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --advertise-addr= \
- --join=,, \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-5. Update your load balancer to recognize the new node.
-
-
-
-
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ chown cockroach /var/lib/cockroach
- ~~~
-
-7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
- ~~~
-
- {{site.data.alerts.callout_info}}
- Previously, the sample configuration file set `TimeoutStopSec` to 60 seconds. This recommendation has been lengthened to 300 seconds, to give the `cockroach` process more time to stop gracefully.
- {{site.data.alerts.end}}
-
- Save the file in the `/etc/systemd/system/` directory.
-
-8. Customize the sample configuration template for your deployment:
-
- Specify values for the following flags in the sample configuration template:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
-9. Repeat these steps for each additional node that you want in your cluster.
-
-
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v22.1/prod-deployment/insecure-start-nodes.md
deleted file mode 100644
index 1a5f95e2b24..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-start-nodes.md
+++ /dev/null
@@ -1,192 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-
-
-
-
-
-
-
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}
-After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir -p /usr/local/lib/cockroach
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-5. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --advertise-addr= \
- --join=,, \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
- This command primes the node to start, using the following flags:
-
- Flag | Description
- -----|------------
- `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication.
- `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`. This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
- `--cache` `--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
- `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
-
-6. Repeat these steps for each additional node that you want in your cluster.
-
-
-
-
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}
-After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir -p /usr/local/lib/cockroach
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-5. Create the Cockroach directory:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-6. Create a Unix user named `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-7. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ chown cockroach /var/lib/cockroach
- ~~~
-
-8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
- ~~~
-
- {{site.data.alerts.callout_info}}
- Previously, the sample configuration file set `TimeoutStopSec` to 60 seconds. This recommendation has been lengthened to 300 seconds, to give the `cockroach` process more time to stop gracefully.
- {{site.data.alerts.end}}
-
-9. In the sample configuration template, specify values for the following flags:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](cockroach-start.html).
-
-10. Start the CockroachDB cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ systemctl start insecurecockroachdb
- ~~~
-
-11. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb`.
-{{site.data.alerts.end}}
-
-
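As a rough sketch of how the `cockroach start` flags described above fit together, a node with a hypothetical internal address of `10.0.0.1`, joining peers at `10.0.0.2` and `10.0.0.3`, could be started as follows (all addresses are illustrative placeholders):

~~~ shell
# Substitute the internal addresses of your own machines for these hypothetical values
$ cockroach start \
  --insecure \
  --advertise-addr=10.0.0.1 \
  --join=10.0.0.1:26257,10.0.0.2:26257,10.0.0.3:26257 \
  --cache=.25 \
  --max-sql-memory=.25 \
  --background
~~~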
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v22.1/prod-deployment/insecure-test-cluster.md
deleted file mode 100644
index 9f1d66fad3b..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-test-cluster.md
+++ /dev/null
@@ -1,41 +0,0 @@
-CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.
-
-When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes.
-
-Use the [built-in SQL client](cockroach-sql.html) locally as follows:
-
-1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure --host=
- ~~~
-
-2. Create an `insecurenodetest` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE insecurenodetest;
- ~~~
-
-3. View the cluster's databases, which will include `insecurenodetest`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SHOW DATABASES;
- ~~~
-
- ~~~
- +--------------------+
- | Database |
- +--------------------+
- | crdb_internal |
- | information_schema |
- | insecurenodetest |
- | pg_catalog |
- | system |
- +--------------------+
- (5 rows)
- ~~~
-
-4. Use `\q` to exit the SQL shell.
diff --git a/src/current/_includes/v22.1/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v22.1/prod-deployment/insecure-test-load-balancing.md
deleted file mode 100644
index ae47b5cd160..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecure-test-load-balancing.md
+++ /dev/null
@@ -1,79 +0,0 @@
-CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_info}}
-Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_success}}
-For comprehensive guidance on benchmarking CockroachDB with TPC-C, see [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html).
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
- This should be a machine that is not running a CockroachDB node.
-
-1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-1. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload init tpcc \
- 'postgresql://root@:26257/tpcc?sslmode=disable'
- ~~~
-
-1. Use the `cockroach workload` command to run the workload for 10 minutes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload run tpcc \
- --duration=10m \
- 'postgresql://root@:26257/tpcc?sslmode=disable'
- ~~~
-
- You'll see per-operation statistics print to standard output every second:
-
- ~~~
- _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
- 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer
- 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer
- 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer
- 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer
- 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer
- 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer
- 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer
- 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer
- 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer
- 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer
- ...
- ~~~
-
- After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output:
-
- ~~~
- _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result
- 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7
- ~~~
-
- {{site.data.alerts.callout_success}}
- For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`.
- {{site.data.alerts.end}}
-
-1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
- Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/src/current/_includes/v22.1/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v22.1/prod-deployment/insecurecockroachdb.service
deleted file mode 100644
index 54d5ea2047a..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/insecurecockroachdb.service
+++ /dev/null
@@ -1,16 +0,0 @@
-[Unit]
-Description=Cockroach Database cluster node
-Requires=network.target
-[Service]
-Type=notify
-WorkingDirectory=/var/lib/cockroach
-ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25
-TimeoutStopSec=300
-Restart=always
-RestartSec=10
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=cockroach
-User=cockroach
-[Install]
-WantedBy=default.target
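When managing a node with this unit file, the standard systemd workflow applies; for example:

~~~ shell
# After copying insecurecockroachdb.service into /etc/systemd/system/,
# reload unit files, then start the node and enable it to start on boot
$ systemctl daemon-reload
$ systemctl start insecurecockroachdb
$ systemctl enable insecurecockroachdb
~~~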
diff --git a/src/current/_includes/v22.1/prod-deployment/join-flag-multi-region.md b/src/current/_includes/v22.1/prod-deployment/join-flag-multi-region.md
deleted file mode 100644
index 93ae34a8716..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/join-flag-multi-region.md
+++ /dev/null
@@ -1 +0,0 @@
-When starting a multi-region cluster, set more than one `--join` address per region, and select nodes that are spread across failure domains. This ensures [high availability](architecture/replication-layer.html#overview).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/join-flag-single-region.md b/src/current/_includes/v22.1/prod-deployment/join-flag-single-region.md
deleted file mode 100644
index 99250cdfee9..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/join-flag-single-region.md
+++ /dev/null
@@ -1 +0,0 @@
-For a cluster in a single region, set 3-5 `--join` addresses. Each starting node will attempt to contact one of the join hosts. If a join host cannot be reached, the node will try another address on the list until it can join the gossip network.
\ No newline at end of file
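For example, a single-region cluster whose first three nodes have the hypothetical internal addresses `10.0.0.1` through `10.0.0.3` could pass the same list to every node:

~~~ shell
# Illustrative addresses; each node can use an identical --join list
--join=10.0.0.1:26257,10.0.0.2:26257,10.0.0.3:26257
~~~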
diff --git a/src/current/_includes/v22.1/prod-deployment/monitor-cluster.md b/src/current/_includes/v22.1/prod-deployment/monitor-cluster.md
deleted file mode 100644
index 363ef1167c1..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/monitor-cluster.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Despite CockroachDB's various [built-in safeguards against failure](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html).
diff --git a/src/current/_includes/v22.1/prod-deployment/process-termination.md b/src/current/_includes/v22.1/prod-deployment/process-termination.md
deleted file mode 100644
index 23f9310572b..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/process-termination.md
+++ /dev/null
@@ -1,13 +0,0 @@
-{{site.data.alerts.callout_danger}}
-We do not recommend sending `SIGKILL` to perform a "hard" shutdown, which bypasses CockroachDB's [node shutdown logic](#node-shutdown-sequence) and forcibly terminates the process. This can corrupt log files and, in certain edge cases, can result in temporary data unavailability, latency spikes, uncertainty errors, ambiguous commit errors, or query timeouts. When decommissioning, a hard shutdown will leave ranges under-replicated and vulnerable to another node failure, causing [quorum](architecture/replication-layer.html#overview) loss in the window before up-replication completes.
-{{site.data.alerts.end}}
-
-- On production deployments, use the process manager to send `SIGTERM` to the process.
-
- - For example, with [`systemd`](https://www.freedesktop.org/wiki/Software/systemd/), run `systemctl stop {systemd config filename}`.
-
-- When using CockroachDB for local testing:
-
- - When running a server in the foreground, use `ctrl-c` in the terminal to send `SIGINT` to the process.
-
- - When running with the [`--background` flag](cockroach-start.html#general), use `pkill`, `kill`, or look up the process ID with `ps -ef | grep cockroach | grep -v grep` and then run `kill -TERM {process ID}`.
\ No newline at end of file
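Putting the local-testing guidance above into concrete commands (the process ID shown is hypothetical):

~~~ shell
# Look up the cockroach process ID, then send SIGTERM to it
$ ps -ef | grep cockroach | grep -v grep
$ kill -TERM 24601
~~~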
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-cache-max-sql-memory.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-cache-max-sql-memory.md
deleted file mode 100644
index 0a6b979c581..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-cache-max-sql-memory.md
+++ /dev/null
@@ -1 +0,0 @@
-For production deployments, set `--cache` to `25%` or higher. Avoid setting `--cache` and `--max-sql-memory` to a combined value of more than 75% of a machine's total RAM. Doing so increases the risk of memory-related failures.
\ No newline at end of file
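As a worked example, on a hypothetical machine with 32 GiB of RAM, the settings below reserve 8 GiB for the cache and 8 GiB for SQL memory, a combined 16 GiB (50% of total RAM), which stays under the 75% guideline:

~~~ shell
# 25% of 32 GiB = 8 GiB each for --cache and --max-sql-memory (16 GiB combined)
--cache=.25 \
--max-sql-memory=.25
~~~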
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-connection-pooling.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-connection-pooling.md
deleted file mode 100644
index 17b87a9988b..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-connection-pooling.md
+++ /dev/null
@@ -1 +0,0 @@
-The total number of workload connections across all connection pools **should not exceed 4 times the number of vCPUs** in the cluster by a large amount.
\ No newline at end of file
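For example, a hypothetical cluster of 3 nodes with 8 vCPUs each has 24 vCPUs in total, so the combined size of all connection pools should stay at or below roughly 4 x 24 = 96 active connections:

~~~ shell
# Hypothetical sizing: 3 nodes x 8 vCPUs = 24 vCPUs
# Keep total workload connections <= 4 x 24 = 96
~~~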
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-disable-swap.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-disable-swap.md
deleted file mode 100644
index f988eb016d4..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-disable-swap.md
+++ /dev/null
@@ -1 +0,0 @@
-Disable Linux memory swapping. Over-allocating memory on production machines can lead to unexpected performance issues when pages have to be read back into memory.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-larger-nodes.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-larger-nodes.md
deleted file mode 100644
index c165a0130b7..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-larger-nodes.md
+++ /dev/null
@@ -1 +0,0 @@
-To optimize for throughput, use larger nodes with up to 32 vCPUs. To further increase throughput, add more nodes to the cluster instead of increasing node size.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-log-volume.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-log-volume.md
deleted file mode 100644
index 7cc1a26ece7..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-log-volume.md
+++ /dev/null
@@ -1 +0,0 @@
-Store CockroachDB [log files](configure-logs.html#logging-directory) in a separate volume from the main data store so that logging is not impacted by I/O throttling.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-lvm.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-lvm.md
deleted file mode 100644
index c1cd5885f1e..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-lvm.md
+++ /dev/null
@@ -1 +0,0 @@
-Do not use LVM in the I/O path. Dynamically resizing CockroachDB store volumes can result in significant performance degradation. Using LVM snapshots in lieu of CockroachDB [backup and restore](take-full-and-incremental-backups.html) is also not supported.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-guidance-store-volume.md b/src/current/_includes/v22.1/prod-deployment/prod-guidance-store-volume.md
deleted file mode 100644
index c957422ce07..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-guidance-store-volume.md
+++ /dev/null
@@ -1 +0,0 @@
-Use dedicated volumes for the CockroachDB [store](cockroach-start.html#store). Do not share the store volume with any other I/O activity.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/prod-see-also.md b/src/current/_includes/v22.1/prod-deployment/prod-see-also.md
deleted file mode 100644
index 42ec5cd32c0..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/prod-see-also.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- [Production Checklist](recommended-production-settings.html)
-- [Manual Deployment](manual-deployment.html)
-- [Orchestrated Deployment](kubernetes-overview.html)
-- [Monitoring and Alerting](monitoring-and-alerting.html)
-- [Performance Benchmarking](performance-benchmarking-with-tpcc-small.html)
-- [Performance Tuning](performance-best-practices-overview.html)
-- [Local Deployment](start-a-local-cluster.html)
diff --git a/src/current/_includes/v22.1/prod-deployment/provision-cpu.md b/src/current/_includes/v22.1/prod-deployment/provision-cpu.md
deleted file mode 100644
index 48896a432cd..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/provision-cpu.md
+++ /dev/null
@@ -1 +0,0 @@
-{% if include.threshold == "absolute_minimum" %}**4 vCPUs**{% elsif include.threshold == "minimum" %}**8 vCPUs**{% elsif include.threshold == "maximum" %}**32 vCPUs**{% endif %}
diff --git a/src/current/_includes/v22.1/prod-deployment/provision-disk-io.md b/src/current/_includes/v22.1/prod-deployment/provision-disk-io.md
deleted file mode 100644
index dadd7113e01..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/provision-disk-io.md
+++ /dev/null
@@ -1 +0,0 @@
-500 IOPS and 30 MB/s per vCPU
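As a worked example, a hypothetical 8-vCPU node would call for roughly:

~~~ shell
# 8 vCPUs x 500 IOPS = 4,000 IOPS
# 8 vCPUs x 30 MB/s  = 240 MB/s of sustained disk throughput
~~~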
diff --git a/src/current/_includes/v22.1/prod-deployment/provision-memory.md b/src/current/_includes/v22.1/prod-deployment/provision-memory.md
deleted file mode 100644
index 98136337374..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/provision-memory.md
+++ /dev/null
@@ -1 +0,0 @@
-**4 GiB of RAM per vCPU**
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/provision-storage.md b/src/current/_includes/v22.1/prod-deployment/provision-storage.md
deleted file mode 100644
index 89b4210fc4f..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/provision-storage.md
+++ /dev/null
@@ -1 +0,0 @@
-320 GiB per vCPU
diff --git a/src/current/_includes/v22.1/prod-deployment/recommended-instances-aws.md b/src/current/_includes/v22.1/prod-deployment/recommended-instances-aws.md
deleted file mode 100644
index 87d0f53e95c..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/recommended-instances-aws.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- Use general-purpose [`m6i` or `m6a`](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose-instances.html) VMs with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html). For example, Cockroach Labs has used `m6i.2xlarge` for performance benchmarking. If your workload requires high throughput, use network-optimized `m5n` instances. To simulate bare-metal deployments, use `m5d` with [SSD Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html).
-
- - `m5` and `m5a` instances, and [compute-optimized `c5`, `c5a`, and `c5n`](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compute-optimized-instances.html) instances, are also acceptable.
-
- {{site.data.alerts.callout_danger}}
- **Do not** use [burstable performance instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html), which limit the load on a single core.
- {{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/prod-deployment/recommended-instances-azure.md b/src/current/_includes/v22.1/prod-deployment/recommended-instances-azure.md
deleted file mode 100644
index 74263dbe9d0..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/recommended-instances-azure.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- Use general-purpose [Dsv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/dv5-dsv5-series) and [Dasv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series) or memory-optimized [Ev5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/ev5-esv5-series) and [Easv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/easv5-eadsv5-series#easv5-series) VMs. For example, Cockroach Labs has used `Standard_D8s_v5`, `Standard_D8as_v5`, `Standard_E8s_v5`, and `Standard_E8as_v5` for performance benchmarking.
-
- - Compute-optimized [F-series](https://docs.microsoft.com/en-us/azure/virtual-machines/fsv2-series) VMs are also acceptable.
-
- {{site.data.alerts.callout_danger}}
- Do not use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on CPU resources. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well.
- {{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/recommended-instances-gcp.md b/src/current/_includes/v22.1/prod-deployment/recommended-instances-gcp.md
deleted file mode 100644
index 6dbe048cd16..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/recommended-instances-gcp.md
+++ /dev/null
@@ -1,5 +0,0 @@
-- Use general-purpose [`t2d-standard`, `n2-standard`, or `n2d-standard`](https://cloud.google.com/compute/pricing#predefined_machine_types) VMs, or use [custom VMs](https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type). For example, Cockroach Labs has used `t2d-standard-8`, `n2-standard-8`, and `n2d-standard-8` for performance benchmarking.
-
- {{site.data.alerts.callout_danger}}
- Do not use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on CPU resources.
- {{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-excessive-concurrency.md b/src/current/_includes/v22.1/prod-deployment/resolution-excessive-concurrency.md
deleted file mode 100644
index 8d776db1dba..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/resolution-excessive-concurrency.md
+++ /dev/null
@@ -1 +0,0 @@
-To prevent issues with workload concurrency, [provision sufficient CPU](recommended-production-settings.html#sizing) and use [connection pooling](recommended-production-settings.html#connection-pooling) for the workload.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-inverted-lsm.md b/src/current/_includes/v22.1/prod-deployment/resolution-inverted-lsm.md
deleted file mode 100644
index ac505cc6b68..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/resolution-inverted-lsm.md
+++ /dev/null
@@ -1 +0,0 @@
-If compaction has fallen behind and caused an [inverted LSM](architecture/storage-layer.html#inverted-lsms), throttle your workload concurrency to allow compaction to catch up and restore a healthy LSM shape. {% include {{ page.version.version }}/prod-deployment/prod-guidance-connection-pooling.md %} If a node is severely impacted, you can [start a new node](cockroach-start.html) and then [decommission the problematic node](node-shutdown.html?filters=decommission#remove-nodes).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-oom-crash.md b/src/current/_includes/v22.1/prod-deployment/resolution-oom-crash.md
deleted file mode 100644
index b2c6c96e356..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/resolution-oom-crash.md
+++ /dev/null
@@ -1 +0,0 @@
-To prevent OOM crashes, [provision sufficient memory](recommended-production-settings.html#memory). If all CockroachDB machines are provisioned and configured correctly, either run the CockroachDB process on another node with sufficient memory, or [reduce the memory allocated to CockroachDB](recommended-production-settings.html#cache-and-sql-memory-size).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/resolution-untuned-query.md b/src/current/_includes/v22.1/prod-deployment/resolution-untuned-query.md
deleted file mode 100644
index e81ff66a53b..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/resolution-untuned-query.md
+++ /dev/null
@@ -1 +0,0 @@
-If you find queries that are consuming too much memory, [cancel the queries](manage-long-running-queries.html#cancel-long-running-queries) to free up memory usage. For information on optimizing query performance, see [SQL Performance Best Practices](performance-best-practices-overview.html).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v22.1/prod-deployment/secure-generate-certificates.md
deleted file mode 100644
index 9870de5b0cf..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-generate-certificates.md
+++ /dev/null
@@ -1,201 +0,0 @@
-You can use `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands.
-
-Locally, you'll need to [create the following certificates and keys](cockroach-cert.html):
-
-- A certificate authority (CA) key pair (`ca.crt` and `ca.key`).
-- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers.
-- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine.
-
-{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}}
-
-1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
-
-2. Create two directories:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir my-safe-directory
- ~~~
- - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes.
- - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes.
-
-3. Create the CA certificate and key:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- \
- \
- \
- \
- localhost \
- 127.0.0.1 \
- \
- \
- \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-5. Upload the CA certificate and node certificate and key to the first node:
-
- {% if page.title contains "Google" %}
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ gcloud compute ssh \
- --project \
- --command "mkdir certs"
- ~~~
-
- {{site.data.alerts.callout_info}}
- `gcloud compute ssh` associates your public SSH key with the GCP project and is only needed when connecting to the first node. See the [GCP docs](https://cloud.google.com/sdk/gcloud/reference/compute/ssh) for more details.
- {{site.data.alerts.end}}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
- @:~/certs
- ~~~
-
- {% elsif page.title contains "AWS" %}
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ssh-add /path/.pem
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ssh @ "mkdir certs"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
- @:~/certs
- ~~~
-
- {% else %}
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ssh @ "mkdir certs"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
- @:~/certs
- ~~~
- {% endif %}
-
-6. Delete the local copy of the node certificate and key:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ rm certs/node.crt certs/node.key
- ~~~
-
- {{site.data.alerts.callout_info}}
- This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag.
- {{site.data.alerts.end}}
-
-7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- \
- \
- \
- \
- localhost \
- 127.0.0.1 \
- \
- \
- \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-8. Upload the CA certificate and node certificate and key to the second node:
-
- {% if page.title contains "AWS" %}
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ssh @ "mkdir certs"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
- @:~/certs
- ~~~
-
- {% else %}
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ssh @ "mkdir certs"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
- @:~/certs
- ~~~
- {% endif %}
-
-9. Repeat steps 6 - 8 for each additional node.
-
-10. Create a client certificate and key for the `root` user:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ssh @ "mkdir certs"
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ scp certs/ca.crt \
- certs/client.root.crt \
- certs/client.root.key \
- @:~/certs
- ~~~
-
- In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well.
-
-{{site.data.alerts.callout_info}}
-On accessing the DB Console in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-db-console-for-a-secure-cluster).
-{{site.data.alerts.end}}
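After generating the files above, you can confirm what ended up in the `certs` directory; `cockroach cert list` prints the certificates and keys it finds there:

~~~ shell
# Run from the directory that contains certs/
$ cockroach cert list --certs-dir=certs
~~~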
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v22.1/prod-deployment/secure-initialize-cluster.md
deleted file mode 100644
index fc92a82b724..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-initialize-cluster.md
+++ /dev/null
@@ -1,8 +0,0 @@
-On your local machine, run the [`cockroach init`](cockroach-init.html) command to complete the node startup process and have them join together as a cluster:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach init --certs-dir=certs --host=
-~~~
-
-After running this command, each node prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients.
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-recommendations.md b/src/current/_includes/v22.1/prod-deployment/secure-recommendations.md
deleted file mode 100644
index 528850dbbb0..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-recommendations.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- Decide how you want to access your DB Console:
-
- Access Level | Description
- -------------|------------
- Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
- Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
- Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console.
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-requirements.md b/src/current/_includes/v22.1/prod-deployment/secure-requirements.md
deleted file mode 100644
index 5c35b0898c8..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-requirements.md
+++ /dev/null
@@ -1,11 +0,0 @@
-- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates.
-
-- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
-
-- Your network configuration must allow TCP communication on the following ports:
- - `26257` for intra-cluster and client-cluster communication
- - `8080` to expose your DB Console
-
-- Carefully review the [Production Checklist](recommended-production-settings.html), including supported hardware and software, and the recommended [Topology Patterns](topology-patterns.html).
-
-{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v22.1/prod-deployment/secure-scale-cluster.md
deleted file mode 100644
index 55af10fc740..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-scale-cluster.md
+++ /dev/null
@@ -1,124 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-
-
-
-
-
-
-
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier).
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --advertise-addr= \
- --join=,, \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-5. Update your load balancer to recognize the new node.
-
-
-
-
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Move the `certs` directory to the `cockroach` directory.
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mv certs /var/lib/cockroach/
- ~~~
-
-7. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ chown -R cockroach /var/lib/cockroach
- ~~~
-
-8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
- ~~~
-
- Save the file in the `/etc/systemd/system/` directory.
-
-9. Customize the sample configuration template for your deployment:
-
- Specify values for the following flags in the sample configuration template:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
-10. Repeat these steps for each additional node that you want in your cluster.
-
-
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-start-nodes.md b/src/current/_includes/v22.1/prod-deployment/secure-start-nodes.md
deleted file mode 100644
index abe72cdbc39..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-start-nodes.md
+++ /dev/null
@@ -1,195 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-
-
-
-
-
-
-
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}
-After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir -p /usr/local/lib/cockroach
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-5. Run the [`cockroach start`](cockroach-start.html) command:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --advertise-addr= \
- --join=,, \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
- This command primes the node to start, using the following flags:
-
- Flag | Description
- -----|------------
- `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node.
- `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`. This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
- `--cache` `--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
- `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
-
-6. Repeat these steps for each additional node that you want in your cluster.
-
-
-
-
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}
-After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir -p /usr/local/lib/cockroach
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-5. Create the Cockroach directory:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-6. Create a Unix user named `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-7. Move the `certs` directory to the `cockroach` directory.
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ mv certs /var/lib/cockroach/
- ~~~
-
-8. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ chown -R cockroach /var/lib/cockroach
- ~~~
-
-9. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ wget -qO /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
- ~~~
-
-10. In the sample configuration template, specify values for the following flags:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality).
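-
- For example, in the template's `ExecStart` line, a node in a hypothetical `us-east` region might add `--locality` like this (the region and zone names are illustrative):
-
- ~~~
- ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr=<node1 address> --join=<node1 address>,<node2 address>,<node3 address> --locality=region=us-east,zone=us-east-1a --cache=.25 --max-sql-memory=.25
- ~~~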
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
-
-11. Start the CockroachDB node:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ systemctl start securecockroachdb
- ~~~
-
-12. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` automatically restarts the node if it fails. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb`.
-{{site.data.alerts.end}}
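-
-If you need to check on a node, standard `systemd` tooling works with this service. For example, to view a node's service status or its recent logs:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ systemctl status securecockroachdb
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ journalctl -u securecockroachdb
-~~~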
-
-
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-test-cluster.md b/src/current/_includes/v22.1/prod-deployment/secure-test-cluster.md
deleted file mode 100644
index cbd81488b0d..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-test-cluster.md
+++ /dev/null
@@ -1,41 +0,0 @@
-CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.
-
-When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes.
-
-Use the [built-in SQL client](cockroach-sql.html) locally as follows:
-
-1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --certs-dir=certs --host=<address of load balancer>
- ~~~
-
-2. Create a `securenodetest` database:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE securenodetest;
- ~~~
-
-3. View the cluster's databases, which will include `securenodetest`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SHOW DATABASES;
- ~~~
-
- ~~~
- +--------------------+
- | Database |
- +--------------------+
- | crdb_internal |
- | information_schema |
- | securenodetest |
- | pg_catalog |
- | system |
- +--------------------+
- (5 rows)
- ~~~
-
-4. Use `\q` to exit the SQL shell.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v22.1/prod-deployment/secure-test-load-balancing.md
deleted file mode 100644
index ea892f8ab33..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/secure-test-load-balancing.md
+++ /dev/null
@@ -1,77 +0,0 @@
-CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_info}}
-Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine.
-{{site.data.alerts.end}}
-
-For comprehensive guidance on benchmarking CockroachDB with TPC-C, refer to [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html).
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
- This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files.
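-
- A quick way to confirm those files are in place (assuming the `certs` directory sits in your current working directory) is to list its contents:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ ls certs
- ~~~
-
- The output should include `ca.crt`, `client.root.crt`, and `client.root.key`.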
-
-1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-1. Copy the binary into the `PATH`:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload init tpcc \
- 'postgresql://root@<address of load balancer>:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key'
- ~~~
-
-1. Use the `cockroach workload` command to run the workload for 10 minutes:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload run tpcc \
- --duration=10m \
- 'postgresql://root@<address of load balancer>:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key'
- ~~~
-
- You'll see per-operation statistics print to standard output every second:
-
- ~~~
- _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
- 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer
- 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer
- 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer
- 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer
- 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer
- 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer
- 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer
- 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer
- 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer
- 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer
- ...
- ~~~
-
- After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output:
-
- ~~~
- _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result
- 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7
- ~~~
-
- {{site.data.alerts.callout_success}}
- For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`.
- {{site.data.alerts.end}}
-
-1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
- Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/src/current/_includes/v22.1/prod-deployment/securecockroachdb.service b/src/current/_includes/v22.1/prod-deployment/securecockroachdb.service
deleted file mode 100644
index 13658ae4cce..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/securecockroachdb.service
+++ /dev/null
@@ -1,16 +0,0 @@
-[Unit]
-Description=Cockroach Database cluster node
-Requires=network.target
-[Service]
-Type=notify
-WorkingDirectory=/var/lib/cockroach
-ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr=<node1 address> --join=<node1 address>,<node2 address>,<node3 address> --cache=.25 --max-sql-memory=.25
-TimeoutStopSec=300
-Restart=always
-RestartSec=10
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=cockroach
-User=cockroach
-[Install]
-WantedBy=default.target
diff --git a/src/current/_includes/v22.1/prod-deployment/synchronize-clocks.md b/src/current/_includes/v22.1/prod-deployment/synchronize-clocks.md
deleted file mode 100644
index ecd82f67d17..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/synchronize-clocks.md
+++ /dev/null
@@ -1,179 +0,0 @@
-CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.
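-
-For example, with the default maximum offset of 500ms, a node shuts down once its clock drifts more than 0.8 × 500ms = 400ms from at least half of the other nodes. The maximum offset can be changed with the `--max-offset` flag of [`cockroach start`](cockroach-start.html).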
-
-{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %}
-
-[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well.
-
-1. SSH to the first machine.
-
-2. Disable `timesyncd`, which tends to be active by default on some Linux distributions:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo timedatectl set-ntp no
- ~~~
-
- Verify that `timesyncd` is off:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ timedatectl
- ~~~
-
- Look for `Network time on: no` or `NTP enabled: no` in the output.
-
-3. Install the `ntp` package:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo apt-get install ntp
- ~~~
-
-4. Stop the NTP daemon:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp stop
- ~~~
-
-5. Sync the machine's clock with Google's NTP service:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpd -b time.google.com
- ~~~
-
- To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
-
- {% include_cached copy-clipboard.html %}
- ~~~
- server time1.google.com iburst
- server time2.google.com iburst
- server time3.google.com iburst
- server time4.google.com iburst
- ~~~
-
- Restart the NTP daemon:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp start
- ~~~
-
- {{site.data.alerts.callout_info}}
- We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
- {{site.data.alerts.end}}
-
-6. Verify that the machine is using a Google NTP server:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpq -p
- ~~~
-
- The active NTP server will be marked with an asterisk.
-
-7. Repeat these steps for each machine where a CockroachDB node will run.
-
-{% elsif page.title contains "Google" %}
-
-Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:
-
-- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances).
-- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
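-
-If you want to verify which time source an instance is actually using, one approach (assuming the instance uses `ntpd` or `chrony`; adjust the paths for your image) is to check the configured servers and confirm that Google's internal NTP server, `metadata.google.internal`, is listed:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ grep -E '^(server|pool)' /etc/ntp.conf /etc/chrony/chrony.conf 2>/dev/null
-~~~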
-
-{% elsif page.title contains "AWS" %}
-
-Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.
-
-- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
-  - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
- - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.
-- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
-
-{% elsif page.title contains "Azure" %}
-
-[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems.
-
-1. SSH to the first machine.
-
-2. Find the ID of the Hyper-V Time Synchronization device:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus
- ~~~
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3
- ~~~
-
- ~~~
- VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization]
- Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee}
- Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee
- Rel_ID=12, target_cpu=0
- ~~~
-
-3. Unbind the device, using the `Device_ID` from the previous command's output:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ echo <Device_ID> | sudo tee /sys/bus/vmbus/drivers/hv_utils/unbind
- ~~~
-
-4. Install the `ntp` package:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo apt-get install ntp
- ~~~
-
-5. Stop the NTP daemon:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp stop
- ~~~
-
-6. Sync the machine's clock with Google's NTP service:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpd -b time.google.com
- ~~~
-
- To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
-
- {% include_cached copy-clipboard.html %}
- ~~~
- server time1.google.com iburst
- server time2.google.com iburst
- server time3.google.com iburst
- server time4.google.com iburst
- ~~~
-
- Restart the NTP daemon:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp start
- ~~~
-
- {{site.data.alerts.callout_info}}
- We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
- {{site.data.alerts.end}}
-
-7. Verify that the machine is using a Google NTP server:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpq -p
- ~~~
-
- The active NTP server will be marked with an asterisk.
-
-8. Repeat these steps for each machine where a CockroachDB node will run.
-
-{% endif %}
diff --git a/src/current/_includes/v22.1/prod-deployment/terminology-vcpu.md b/src/current/_includes/v22.1/prod-deployment/terminology-vcpu.md
deleted file mode 100644
index 790ce37a2b9..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/terminology-vcpu.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-In our sizing and production guidance, 1 vCPU is considered equivalent to 1 core in the underlying hardware platform.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/topology-recommendations.md b/src/current/_includes/v22.1/prod-deployment/topology-recommendations.md
deleted file mode 100644
index 31384079cec..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/topology-recommendations.md
+++ /dev/null
@@ -1,19 +0,0 @@
-- Run each node on a separate machine. Since CockroachDB replicates across nodes, running more than one node per machine increases the risk of data loss if a machine fails. Likewise, if a machine has multiple disks or SSDs, run one node with multiple `--store` flags and not one node per disk. For more details about stores, see [Start a Node](cockroach-start.html#store).
-
-- When starting each node, use the [`--locality`](cockroach-start.html#locality) flag to describe the node's location, for example, `--locality=region=west,zone=us-west-1`. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes.
-
-- When deploying in a single availability zone:
-
- - To be able to tolerate the failure of any 1 node, use at least 3 nodes with the [`default` 3-way replication factor](configure-replication-zones.html#view-the-default-replication-zone). In this case, if 1 node fails, each range retains 2 of its 3 replicas, a majority.
-
- - To be able to tolerate 2 simultaneous node failures, use at least 5 nodes and [increase the `default` replication factor for user data](configure-replication-zones.html#edit-the-default-replication-zone) to 5. The replication factor for [important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) is 5 by default, so no adjustments are needed for internal data. In this case, if 2 nodes fail at the same time, each range retains 3 of its 5 replicas, a majority.
-
-- When deploying across multiple availability zones:
-
- - To be able to tolerate the failure of 1 entire AZ in a region, use at least 3 AZs per region and set `--locality` on each node to spread data evenly across regions and AZs. In this case, if 1 AZ goes offline, the 2 remaining AZs retain a majority of replicas.
-
- - To ensure that ranges are split evenly across nodes, use the same number of nodes in each AZ. This is to avoid overloading any nodes with excessive resource consumption.
-
-- When deploying across multiple regions:
-
- - To be able to tolerate the failure of 1 entire region, use at least 3 regions.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/prod-deployment/use-cluster.md b/src/current/_includes/v22.1/prod-deployment/use-cluster.md
deleted file mode 100644
index 0e65c9fb94c..00000000000
--- a/src/current/_includes/v22.1/prod-deployment/use-cluster.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Now that your deployment is working, you can:
-
-1. [Implement your data model](sql-statements.html).
-1. [Create users](create-user.html) and [grant them privileges](grant.html).
-1. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node.
-1. [Take backups](take-full-and-incremental-backups.html) of your data.
-
-You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see [Configure Replication Zones](configure-replication-zones.html).
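-
-As a sketch of what a replication-factor change looks like (see [Configure Replication Zones](configure-replication-zones.html) for the full syntax and options), the following statement raises the default replication factor for user data to 5:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
-~~~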
-
-{{site.data.alerts.callout_danger}}
-When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/scram-authentication-recommendations.md b/src/current/_includes/v22.1/scram-authentication-recommendations.md
deleted file mode 100644
index 2ad41f75cd8..00000000000
--- a/src/current/_includes/v22.1/scram-authentication-recommendations.md
+++ /dev/null
@@ -1,4 +0,0 @@
-- Test and adjust your workloads in batches when migrating to SCRAM authentication.
-- Start by enabling SCRAM authentication in a testing environment, and test the performance of your client application against the types of workloads you expect it to handle in production before rolling the changes out to production.
-- Limit the maximum number of connections in the client driver's connection pool.
-- Limit the maximum number of concurrent transactions the client application can issue.
diff --git a/src/current/_includes/v22.1/setup/create-a-free-cluster.md b/src/current/_includes/v22.1/setup/create-a-free-cluster.md
deleted file mode 100644
index 101a57da5e0..00000000000
--- a/src/current/_includes/v22.1/setup/create-a-free-cluster.md
+++ /dev/null
@@ -1,7 +0,0 @@
-1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account.
-1. [Log in](https://cockroachlabs.cloud/) to your CockroachDB {{ site.data.products.cloud }} account.
-1. On the **Clusters** page, click **Create Cluster**.
-1. On the **Create your cluster** page, select **Serverless**.
-1. Click **Create cluster**.
-
- Your cluster will be created in a few seconds and the **Create SQL user** dialog will display.
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/setup/create-first-sql-user.md b/src/current/_includes/v22.1/setup/create-first-sql-user.md
deleted file mode 100644
index e85b983ba64..00000000000
--- a/src/current/_includes/v22.1/setup/create-first-sql-user.md
+++ /dev/null
@@ -1,8 +0,0 @@
-The **Create SQL user** dialog allows you to create a new SQL user and password.
-
-1. Enter a username in the **SQL user** field or use the one provided by default.
-1. Click **Generate & save password**.
-1. Copy the generated password and save it in a secure location.
-1. Click **Next**.
-
- By default, all new SQL users are created with full privileges. For more information and to change the default settings, refer to [Manage SQL users on a cluster](../cockroachcloud/managing-access.html#manage-sql-users-on-a-cluster).
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/setup/init-bank-sample.md b/src/current/_includes/v22.1/setup/init-bank-sample.md
deleted file mode 100644
index 77cfd76c34d..00000000000
--- a/src/current/_includes/v22.1/setup/init-bank-sample.md
+++ /dev/null
@@ -1,38 +0,0 @@
-1. Set the `DATABASE_URL` environment variable to the connection string for your cluster:
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable"
- ~~~
-
-
-
-
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- export DATABASE_URL="{connection-string}"
- ~~~
-
- Where `{connection-string}` is the connection string you copied earlier.
-
-
-
-
-1. To initialize the example database, use the [`cockroach sql`](cockroach-sql.html) command to execute the SQL statements in the `dbinit.sql` file:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- cat dbinit.sql | cockroach sql --url $DATABASE_URL
- ~~~
-
- The SQL statement in the initialization file should execute:
-
- ~~~
- CREATE TABLE
-
-
- Time: 102ms
- ~~~
diff --git a/src/current/_includes/v22.1/setup/sample-setup-certs.md b/src/current/_includes/v22.1/setup/sample-setup-certs.md
deleted file mode 100644
index e97f02a636e..00000000000
--- a/src/current/_includes/v22.1/setup/sample-setup-certs.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-## Choose your installation method
-
-You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool.
-
-
-
-
-
-
-
-
-### Create a free cluster
-
-{% include {{ page.version.version }}/setup/create-a-free-cluster.md %}
-
-### Create a SQL user
-
-{% include {{ page.version.version }}/setup/create-first-sql-user.md %}
-
-### Get the root certificate
-
-The **Connect to cluster** dialog shows information about how to connect to your cluster.
-
-1. Select **General connection string** from the **Select option** dropdown.
-1. Open a new terminal on your local machine, and run the **CA Cert download command** provided in the **Download CA Cert** section. The client driver used in this tutorial requires this certificate to connect to CockroachDB {{ site.data.products.cloud }}.
-
-### Get the connection string
-
-Open the **General connection string** section, then copy the connection string provided and save it in a secure location.
-
-{{site.data.alerts.callout_info}}
-The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster/<cluster-id>/users`.
-{{site.data.alerts.end}}
-
-
-
-
-
-Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool.
-
-{{site.data.alerts.callout_info}}
-The ccloud CLI tool is in Preview.
-{{site.data.alerts.end}}
-
-
-### Install ccloud
-
-{% include cockroachcloud/ccloud/install-ccloud.md %}
-
-### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string.
-
-{% include cockroachcloud/ccloud/quickstart.md %}
-
-Select **General connection string**, then copy the connection string displayed and save it in a secure location. The connection string is the line starting with `postgresql://`.
-
-~~~
-? How would you like to connect? General connection string
-Retrieving cluster info: succeeded
- Downloading cluster cert to /Users/maxroach/.postgresql/root.crt: succeeded
-postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt
-~~~
-
-
-You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool.
-
-
-
-
-
-
-
-
-### Create a free cluster
-
-{% include {{ page.version.version }}/setup/create-a-free-cluster.md %}
-
-### Create a SQL user
-
-{% include {{ page.version.version }}/setup/create-first-sql-user.md %}
-
-### Get the connection string
-
-The **Connect to cluster** dialog shows information about how to connect to your cluster.
-
-1. Select **Java** from the **Select option/language** dropdown.
-1. Select **JDBC** from the **Select tool** dropdown.
-1. Copy the command provided to set the `JDBC_DATABASE_URL` environment variable.
-
- {{site.data.alerts.callout_info}}
- The JDBC connection URL is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster/<cluster-id>/users`.
- {{site.data.alerts.end}}
-
-
-
-
-
-Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool.
-
-{{site.data.alerts.callout_info}}
-The ccloud CLI tool is in Preview.
-{{site.data.alerts.end}}
-
-
-### Install ccloud
-
-{% include cockroachcloud/ccloud/install-ccloud.md %}
-
-### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string.
-
-{% include cockroachcloud/ccloud/quickstart.md %}
-
-Select **General connection string**, then copy the connection string displayed and save it in a secure location. The connection string is the line starting with `postgresql://`.
-
-~~~
-? How would you like to connect? General connection string
-Retrieving cluster info: succeeded
- Downloading cluster cert to /Users/maxroach/.postgresql/root.crt: succeeded
-postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt
-~~~
-
-
-
-
-
-
-{% include {{ page.version.version }}/setup/start-single-node-insecure.md %}
-
-
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/setup/sample-setup-parameters-certs.md b/src/current/_includes/v22.1/setup/sample-setup-parameters-certs.md
deleted file mode 100644
index d2ecc91fd78..00000000000
--- a/src/current/_includes/v22.1/setup/sample-setup-parameters-certs.md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-## Choose your installation method
-
-You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool.
-
-
-
-
-
-
-
-
-### Create a free cluster
-
-{% include {{ page.version.version }}/setup/create-a-free-cluster.md %}
-
-### Create a SQL user
-
-{% include {{ page.version.version }}/setup/create-first-sql-user.md %}
-
-### Get the root certificate
-
-The **Connect to cluster** dialog shows information about how to connect to your cluster.
-
-1. Select **General connection string** from the **Select option** dropdown.
-1. Open a new terminal on your local machine, and run the **CA Cert download command** provided in the **Download CA Cert** section. The client driver used in this tutorial requires this certificate to connect to CockroachDB {{ site.data.products.cloud }}.
-
-### Get the connection information
-
-1. Select **Parameters only** from the **Select option** dropdown.
-1. Copy the connection information for each parameter displayed and save it in a secure location.
-
-
-
-
-
-Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool.
-
-{{site.data.alerts.callout_info}}
-The ccloud CLI tool is in Preview.
-{{site.data.alerts.end}}
-
-
-### Install ccloud
-
-{% include cockroachcloud/ccloud/install-ccloud.md %}
-
-### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string.
-
-{% include cockroachcloud/ccloud/quickstart.md %}
-
-Select **Parameters only**, then copy the connection parameters displayed and save them in a secure location.
-
-~~~
-? How would you like to connect? Parameters only
-Looking up cluster ID: succeeded
-Creating SQL user: succeeded
-Success! Created SQL user
- name: maxroach
- cluster: 37174250-b944-461f-b1c1-3a99edb6af32
-Retrieving cluster info: succeeded
-Connection parameters
- Database: defaultdb
- Host: blue-dog-147.6wr.cockroachlabs.cloud
- Password: ThisIsNotAGoodPassword
- Port: 26257
- Username: maxroach
-~~~
-
-
-
-You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool.
-
-
-
-
-
-
-
-
-### Create a free cluster
-
-{% include {{ page.version.version }}/setup/create-a-free-cluster.md %}
-
-### Create a SQL user
-
-{% include {{ page.version.version }}/setup/create-first-sql-user.md %}
-
-### Get the connection information
-
-The **Connect to cluster** dialog shows information about how to connect to your cluster.
-
-1. Select **Parameters only** from the **Select option** dropdown.
-1. Copy the connection information for each parameter displayed and save it in a secure location.
-
-
-
-
-
-Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool.
-
-{{site.data.alerts.callout_info}}
-The ccloud CLI tool is in Preview.
-{{site.data.alerts.end}}
-
-
-### Install ccloud
-
-{% include cockroachcloud/ccloud/install-ccloud.md %}
-
-### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string.
-
-{% include cockroachcloud/ccloud/quickstart.md %}
-
-Select **Parameters only**, then copy the connection parameters displayed and save them in a secure location.
-
-~~~
-? How would you like to connect? Parameters only
-Looking up cluster ID: succeeded
-Creating SQL user: succeeded
-Success! Created SQL user
- name: maxroach
- cluster: 37174250-b944-461f-b1c1-3a99edb6af32
-Retrieving cluster info: succeeded
-Connection parameters
- Database: defaultdb
- Host: blue-dog-147.6wr.cockroachlabs.cloud
- Password: ThisIsNotAGoodPassword
- Port: 26257
- Username: maxroach
-~~~
-
-
-
-You can create a CockroachDB {{ site.data.products.serverless }} cluster using either the CockroachDB Cloud Console, a web-based graphical user interface (GUI) tool, or ccloud, a command-line interface (CLI) tool.
-
-
-
-
-
-
-
-
-### Create a free cluster
-
-{% include {{ page.version.version }}/setup/create-a-free-cluster.md %}
-
-### Create a SQL user
-
-{% include {{ page.version.version }}/setup/create-first-sql-user.md %}
-
-### Get the connection string
-
-The **Connect to cluster** dialog shows information about how to connect to your cluster.
-
-1. Select **General connection string** from the **Select option** dropdown.
-1. Open the **General connection string** section, then copy the connection string provided and save it in a secure location.
-
- The sample application used in this tutorial uses system CA certificates for server certificate verification, so you can skip the **Download CA Cert** instructions.
-
- {{site.data.alerts.callout_info}}
- The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster/<cluster-id>/users`.
- {{site.data.alerts.end}}
-
-
-
-
-
-Follow these steps to create a CockroachDB {{ site.data.products.serverless }} cluster using the ccloud CLI tool.
-
-{{site.data.alerts.callout_info}}
-The ccloud CLI tool is in Preview.
-{{site.data.alerts.end}}
-
-
-### Install ccloud
-
-{% include cockroachcloud/ccloud/install-ccloud.md %}
-
-### Run `ccloud quickstart` to create a new cluster, create a SQL user, and retrieve the connection string.
-
-{% include cockroachcloud/ccloud/quickstart.md %}
-
-Select **General connection string**, then copy the connection string displayed and save it in a secure location. The connection string is the line starting with `postgresql://`.
-
-~~~
-? How would you like to connect? General connection string
-Retrieving cluster info: succeeded
- Downloading cluster cert to /Users/maxroach/.postgresql/root.crt: succeeded
-postgresql://maxroach:ThisIsNotAGoodPassword@blue-dog-147.6wr.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt
-~~~
-
-**Env Variable:** `COCKROACH_URL`<br><br>**Default:** no URL
-`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.
-`-u` | The [SQL user](create-user.html) that will own the client session.<br><br>**Env Variable:** `COCKROACH_USER`<br><br>**Default:** `root`
-`--insecure` | Use an insecure connection.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br><br>**Default:** `false`
-`--cert-principal-map` | A comma-separated list of `<cert-principal>:<db-principal>` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields.
-`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br><br>**Default:** `${HOME}/.cockroach-certs/`
\ No newline at end of file
diff --git a/src/current/_includes/v22.1/sql/covering-index.md b/src/current/_includes/v22.1/sql/covering-index.md
deleted file mode 100644
index 4ce5b00cf12..00000000000
--- a/src/current/_includes/v22.1/sql/covering-index.md
+++ /dev/null
@@ -1 +0,0 @@
-An index that stores all the columns needed by a query is also known as a _covering index_ for that query. When a query has a covering index, CockroachDB can use that index directly instead of doing an "index join" with the primary index, which is likely to be slower.
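-
-As an illustrative sketch (the `rides` table and its columns are hypothetical), you can often make an index covering for a query by adding the extra columns the query reads with a `STORING` clause:
-
-~~~ sql
-> CREATE INDEX ON rides (start_time) STORING (rider_id);
-~~~
-
-With this index, a query such as `SELECT rider_id FROM rides WHERE start_time > '2022-07-01'` can be answered from the index alone, without an index join against the primary index.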
diff --git a/src/current/_includes/v22.1/sql/crdb-internal-partitions-example.md b/src/current/_includes/v22.1/sql/crdb-internal-partitions-example.md
deleted file mode 100644
index 680b0adf261..00000000000
--- a/src/current/_includes/v22.1/sql/crdb-internal-partitions-example.md
+++ /dev/null
@@ -1,43 +0,0 @@
-## Querying partitions programmatically
-
-The `crdb_internal.partitions` internal table contains information about the partitions in your database. In testing, scripting, and other programmatic environments, we recommend querying this table for partition information instead of using the `SHOW PARTITIONS` statement. For example, to get all `us_west` partitions in your database, you can run the following query:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM crdb_internal.partitions WHERE name='us_west';
-~~~
-
-~~~
- table_id | index_id | parent_name | name | columns | column_names | list_value | range_value | zone_id | subzone_id
-+----------+----------+-------------+---------+---------+--------------+-------------------------------------------------+-------------+---------+------------+
- 53 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 0 | 0
- 54 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 1
- 54 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 2
- 55 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 1
- 55 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 2
- 55 | 3 | NULL | us_west | 1 | vehicle_city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 3
- 56 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 56 | 1
- 58 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 58 | 1
-(8 rows)
-~~~
-
-Other internal tables, like `crdb_internal.tables`, include information that could be useful in conjunction with `crdb_internal.partitions`.
-
-For example, if you want the output for your partitions to include the name of the table and database, you can perform a join of the two tables:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT
- partitions.name AS partition_name, column_names, list_value, tables.name AS table_name, database_name
- FROM crdb_internal.partitions JOIN crdb_internal.tables ON partitions.table_id=tables.table_id
- WHERE tables.name='users';
-~~~
-
-~~~
- partition_name | column_names | list_value | table_name | database_name
-+----------------+--------------+-------------------------------------------------+------------+---------------+
- us_west | city | ('seattle'), ('san francisco'), ('los angeles') | users | movr
- us_east | city | ('new york'), ('boston'), ('washington dc') | users | movr
- europe_west | city | ('amsterdam'), ('paris'), ('rome') | users | movr
-(3 rows)
-~~~
diff --git a/src/current/_includes/v22.1/sql/crdb-internal-partitions.md b/src/current/_includes/v22.1/sql/crdb-internal-partitions.md
deleted file mode 100644
index ebab5abe4ed..00000000000
--- a/src/current/_includes/v22.1/sql/crdb-internal-partitions.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_success}}
-In testing, scripting, and other programmatic environments, we recommend querying the `crdb_internal.partitions` internal table for partition information instead of using the `SHOW PARTITIONS` statement. For more information, see [Querying partitions programmatically](show-partitions.html#querying-partitions-programmatically).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/sql/cursors-vs-keyset-pagination.md b/src/current/_includes/v22.1/sql/cursors-vs-keyset-pagination.md
deleted file mode 100644
index ba5391b5ace..00000000000
--- a/src/current/_includes/v22.1/sql/cursors-vs-keyset-pagination.md
+++ /dev/null
@@ -1,3 +0,0 @@
-_Cursors_ are stateful objects that use more database resources than keyset pagination, since each cursor holds open a transaction. However, they are easier to use, and make it easier to get consistent results without having to write complex queries from your application logic. They do not require that the results be returned in a particular order (that is, you don't have to include an `ORDER BY` clause), which makes them more flexible.
-
-_Keyset pagination_ queries are usually much faster than cursors since they order by indexed columns. However, in order to get that performance they require that you return results in some defined order that can be calculated by your application's queries. Because that ordering involves calculating the start/end point of pages of results based on an indexed key, they require more care to write correctly.
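-
-As a minimal sketch of the keyset approach (the table and column names are hypothetical), each page filters on the indexed key of the last row returned by the previous page:
-
-~~~ sql
-> SELECT id, name FROM users WHERE id > $1 ORDER BY id LIMIT 100;
-~~~
-
-Here `$1` is bound to the last `id` returned by the previous page, and `id` is assumed to be indexed (for example, as the primary key).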
diff --git a/src/current/_includes/v22.1/sql/db-terms.md b/src/current/_includes/v22.1/sql/db-terms.md
deleted file mode 100644
index ecaf4745fc8..00000000000
--- a/src/current/_includes/v22.1/sql/db-terms.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-To avoid confusion with the general term "[database](https://en.wikipedia.org/wiki/Database)", throughout this guide we refer to the logical object as a *database*, to CockroachDB by name, and to a deployment of CockroachDB as a [*cluster*](architecture/glossary.html#cockroachdb-architecture-terms).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v22.1/sql/dev-schema-change-limits.md b/src/current/_includes/v22.1/sql/dev-schema-change-limits.md
deleted file mode 100644
index f778a483420..00000000000
--- a/src/current/_includes/v22.1/sql/dev-schema-change-limits.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Review the [limitations of online schema changes](online-schema-changes.html#limitations). CockroachDB [doesn't guarantee the atomicity of schema changes within transactions with multiple statements](online-schema-changes.html#schema-changes-within-transactions).
-
- Cockroach Labs recommends that you perform schema changes outside explicit transactions. When a database [schema management tool](third-party-database-tools.html#schema-migration-tools) manages transactions on your behalf, include one schema change operation per transaction.
diff --git a/src/current/_includes/v22.1/sql/dev-schema-changes.md b/src/current/_includes/v22.1/sql/dev-schema-changes.md
deleted file mode 100644
index e6aad1f0361..00000000000
--- a/src/current/_includes/v22.1/sql/dev-schema-changes.md
+++ /dev/null
@@ -1 +0,0 @@
-Use a [database schema migration tool](third-party-database-tools.html#schema-migration-tools) or the [CockroachDB SQL client](cockroach-sql.html) instead of a [client library](third-party-database-tools.html#drivers) to execute [database schema changes](online-schema-changes.html).
diff --git a/src/current/_includes/v22.1/sql/enable-super-region-primary-region-changes.md b/src/current/_includes/v22.1/sql/enable-super-region-primary-region-changes.md
deleted file mode 100644
index e58c4ac917d..00000000000
--- a/src/current/_includes/v22.1/sql/enable-super-region-primary-region-changes.md
+++ /dev/null
@@ -1,23 +0,0 @@
-By default, you may not change the [primary region](set-primary-region.html) of a [multi-region database](multiregion-overview.html) when that region is part of a super region. This is a safety setting designed to prevent you from accidentally moving the data for a [regional table](regional-tables.html) that is meant to be stored in the super region out of that super region, which could break your data domiciling setup.
-
-If you are sure about what you are doing, you can allow modifying the primary region by setting the `alter_primary_region_super_region_override` [session setting](set-vars.html) to `'on'`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SET alter_primary_region_super_region_override = 'on';
-~~~
-
-~~~
-SET
-~~~
-
-You can also accomplish this by setting the `sql.defaults.alter_primary_region_super_region_override.enable` [cluster setting](cluster-settings.html) to `true`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SET CLUSTER SETTING sql.defaults.alter_primary_region_super_region_override.enable = true;
-~~~
-
-~~~
-SET CLUSTER SETTING
-~~~
diff --git a/src/current/_includes/v22.1/sql/enable-super-regions.md b/src/current/_includes/v22.1/sql/enable-super-regions.md
deleted file mode 100644
index 8d6cd8a4080..00000000000
--- a/src/current/_includes/v22.1/sql/enable-super-regions.md
+++ /dev/null
@@ -1,21 +0,0 @@
-To enable super regions, set the `enable_super_regions` [session setting](set-vars.html) to `'on'`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SET enable_super_regions = 'on';
-~~~
-
-~~~
-SET
-~~~
-
-You can also set the `sql.defaults.super_regions.enabled` [cluster setting](cluster-settings.html) to `true`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-SET CLUSTER SETTING sql.defaults.super_regions.enabled = true;
-~~~
-
-~~~
-SET CLUSTER SETTING
-~~~
diff --git a/src/current/_includes/v22.1/sql/expression-indexes-cannot-reference-computed-columns.md b/src/current/_includes/v22.1/sql/expression-indexes-cannot-reference-computed-columns.md
deleted file mode 100644
index 4c66aca7d8b..00000000000
--- a/src/current/_includes/v22.1/sql/expression-indexes-cannot-reference-computed-columns.md
+++ /dev/null
@@ -1,3 +0,0 @@
-CockroachDB does not allow {% if page.name == "expression-indexes.md" %} expression indexes {% else %} [expression indexes](expression-indexes.html) {% endif %} to reference [computed columns](computed-columns.html).
-
- [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67900)
diff --git a/src/current/_includes/v22.1/sql/expressions-as-on-conflict-targets.md b/src/current/_includes/v22.1/sql/expressions-as-on-conflict-targets.md
deleted file mode 100644
index 2b328c1e4f3..00000000000
--- a/src/current/_includes/v22.1/sql/expressions-as-on-conflict-targets.md
+++ /dev/null
@@ -1,40 +0,0 @@
-CockroachDB does not support expressions as `ON CONFLICT` targets. This means that unique {% if page.name == "expression-indexes.md" %} expression indexes {% else %} [expression indexes](expression-indexes.html) {% endif %} cannot be selected as arbiters for [`INSERT .. ON CONFLICT`](insert.html#on-conflict-clause) statements. For example:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE t (a INT, b INT, UNIQUE INDEX ((a + b)));
-~~~
-
-~~~
-CREATE TABLE
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING;
-~~~
-
-~~~
-invalid syntax: statement ignored: at or near "(": syntax error
-SQLSTATE: 42601
-DETAIL: source SQL:
-INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING
- ^
-HINT: try \h INSERT
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10;
-~~~
-
-~~~
-invalid syntax: statement ignored: at or near "(": syntax error
-SQLSTATE: 42601
-DETAIL: source SQL:
-INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10
- ^
-HINT: try \h INSERT
-~~~
-
-[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67893)
diff --git a/src/current/_includes/v22.1/sql/function-special-forms.md b/src/current/_includes/v22.1/sql/function-special-forms.md
deleted file mode 100644
index b9ac987444a..00000000000
--- a/src/current/_includes/v22.1/sql/function-special-forms.md
+++ /dev/null
@@ -1,29 +0,0 @@
-| Special form | Equivalent to |
-|-----------------------------------------------------------|---------------------------------------------|
-| `AT TIME ZONE` | `timezone()` |
-| `CURRENT_CATALOG` | `current_catalog()` |
-| `COLLATION FOR` | `pg_collation_for()` |
-| `CURRENT_DATE` | `current_date()` |
-| `CURRENT_ROLE` | `current_user()` |
-| `CURRENT_SCHEMA` | `current_schema()` |
-| `CURRENT_TIMESTAMP` | `current_timestamp()` |
-| `CURRENT_TIME` | `current_time()` |
-| `CURRENT_USER` | `current_user()` |
-| `EXTRACT(<part> FROM <value>)`                             | `extract("<part>", <value>)`                 |
-| `EXTRACT_DURATION(<part> FROM <value>)`                    | `extract_duration("<part>", <value>)`        |
-| `OVERLAY( PLACING