diff --git a/src/current/Gemfile b/src/current/Gemfile index de20eb2bc1d..f8d0601daa8 100644 --- a/src/current/Gemfile +++ b/src/current/Gemfile @@ -13,7 +13,9 @@ gem "redcarpet", "~> 3.6" gem "rss" gem "webrick" gem "jekyll-minifier" - +gem "csv" +gem "base64" +gem "bigdecimal" group :jekyll_plugins do gem "jekyll-include-cache" gem 'jekyll-algolia', "~> 1.0", path: "./jekyll-algolia-dev" diff --git a/src/current/_includes/releases/v21.2/v21.2.0-beta.1.md b/src/current/_includes/releases/v21.2/v21.2.0-beta.1.md deleted file mode 100644 index 8a70d7d5c30..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.0-beta.1.md +++ /dev/null @@ -1,1197 +0,0 @@ -## v21.2.0-beta.1 - -Release Date: September 24, 2021 - -{{site.data.alerts.callout_danger}} -This testing release includes a known bug. We do **not** recommend upgrading to this release. The [v21.2.0-beta.2 release](#v21-2-0-beta-2) includes a fix for the bug. -{{site.data.alerts.end}} - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Backward-incompatible changes

-
-- Previously, CockroachDB only supported the YMD format for parsing timestamps from strings. It now also supports the MDY format to better align with PostgreSQL. A timestamp such as `1-1-18`, which was previously interpreted as `2001-01-18`, will now be interpreted as `2018-01-01`. To continue interpreting the timestamp in the YMD format, the first number can be represented with 4 digits, `2001-1-18` (see the example after this list). [#64381][#64381]
-- The deprecated setting `cloudstorage.gs.default.key` has been removed, and the behavior of the `AUTH` parameter in Google Cloud Storage [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) and `IMPORT` URIs has been changed. The default behavior is now that of `AUTH=specified`, which uses the credentials passed in the `CREDENTIALS` parameter, and the previous default behavior of using the node's implicit access (via its machine account or role) now requires explicitly passing `AUTH=implicit`. [#64737][#64737]
-- Switched types from `TEXT` to `"char"` for compatibility with PostgreSQL in the following columns: `pg_constraint` (`confdeltype`, `confmatchtype`, `confupdtype`, `contype`), `pg_operator` (`oprkind`), `pg_proc` (`proargmodes`), `pg_rewrite` (`ev_enabled`, `ev_type`), `pg_trigger` (`tgenabled`). [#65101][#65101]
-
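As a concrete illustration of the timestamp-parsing change above, a minimal sketch (results shown for the new default behavior; exact output also depends on the session's `DateStyle`):

```sql
-- Previously interpreted as 2001-01-18 (YMD); now parsed as 2018-01-01 (MDY).
SELECT '1-1-18'::TIMESTAMP;

-- Spelling the year with four digits keeps the YMD interpretation.
SELECT '2001-1-18'::TIMESTAMP;
```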

Security updates

-
-- Certain HTTP debug endpoints reserved to [`admin`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role) users now return more details about range start/end keys, such as the "hot ranges" report. [#63748][#63748]
-- The [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `server.remote_debugging.mode` has been removed. The debug endpoints are now available to every client with access to the HTTP port. All the HTTP URLs previously affected by this setting already have user authentication and require a user to be logged in as a member of the [`admin`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role) role, so there was no need for an additional layer of security. [#63748][#63748]
-- There is now a cache for per-user authentication-related information. The cache is always kept up-to-date: it checks whether any change has been made to the underlying authentication tables since the last time the cache was updated. The cached data includes the user's hashed password, the [`NOLOGIN`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles) role option, and the [`VALID UNTIL`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles) role option (see the sketch after this list). [#66919][#66919]
-- The `--cert-principal-map` flag now allows the certificate principal name to contain colons. [#67703][#67703]
-- Added the [`admin`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role)-only debugging [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) functions `crdb_internal.read_file` and `crdb_internal.write_file` to read/write bytes from/to external storage URIs used by [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) or [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import). [#67427][#67427]
-- The certificate loader now allows the loading of [`root`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#root-user)-owned private keys, provided the owning group matches the group of the (non-`root`) user running CockroachDB. [#68182][#68182]
-- Old authentication web session rows in the `system.web_sessions` table no longer accumulate indefinitely. These rows are periodically deleted. Refer to the documentation for details about the new [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) for `system.web_sessions`. [#67547][#67547]
-- The error returned during a failed authentication attempt will now include the `InvalidAuthorizationSpecification` PostgreSQL error code (`28000`). [#69106][#69106]
-
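A minimal sketch of the role options whose values the new authentication cache covers (the role name `analyst` is hypothetical):

```sql
-- Changes to any of these fields invalidate the cached entry for the role.
ALTER ROLE analyst WITH PASSWORD 'hunter2';    -- hashed password
ALTER ROLE analyst NOLOGIN;                    -- NOLOGIN role option
ALTER ROLE analyst VALID UNTIL '2022-01-01';   -- VALID UNTIL role option
```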

General changes

-
-- CockroachDB now supports new [debug endpoints](https://www.cockroachlabs.com/docs/v21.2/monitoring-and-alerting#raw-status-endpoints) to help users with troubleshooting. [#69594][#69594]
-- The `kv.closed_timestamp.closed_fraction` and `kv.follower_read.target_multiple` [settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) are now deprecated and turned into no-ops. They had already stopped controlling the closing of timestamps in v21.1, but were still influencing the [`follower_read_timestamp()`](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) computation for a timestamp that's likely to be closed on all followers. To replace them, a simpler `kv.closed_timestamp.propagation_slack` setting is introduced, modeling the delay between when a leaseholder closes a timestamp and when all the followers become aware of it (defaults conservatively to 1s). `follower_read_timestamp()` is now computed as `kv.closed_timestamp.target_duration` + `kv.closed_timestamp.side_transport_interval` + `kv.closed_timestamp.propagation_slack`, which defaults to 4.2s (instead of the previous default of 4.8s). [#69775][#69775]
-- Added documentation for [Cluster API](https://www.cockroachlabs.com/docs/v21.2/cluster-api) v2 endpoints. [#62560][#62560]
-- Added `crdb_internal.create_join_token()` SQL [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) function to create join tokens for use when joining new nodes to a secure cluster. This functionality is hidden behind a feature flag. [#62053][#62053]
-- Added [Cluster API](https://www.cockroachlabs.com/docs/v21.2/cluster-api) v2 endpoints for querying databases, tables, users, and events in a database. [#63000][#63000]
-- All SQL-level [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) have been made public. [#66688][#66688]
-- The setting `kv.transaction.write_pipelining_max_outstanding_size` is now a no-op. Its function is folded into the `kv.transaction.max_intents_bytes` setting. [#66915][#66915]
-- Introduced a `/_status/regions` endpoint which returns all regions along with their availability zones. [#67098][#67098]
-- `crdb_internal.regions` is now accessible from a tenant. [#67098][#67098]
-- Retries of jobs that fail due to a retryable error or due to job coordinator failure are now delayed using exponential backoff. Before this change, jobs that failed in a retryable manner would be resumed immediately on a different coordinator. This change reduces the impact of recurrently failing jobs on the cluster. This change adds two new [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) that control this behavior: `jobs.registry.retry.initial_delay` and `jobs.registry.retry.max_delay`, which respectively control the initial delay and the maximum delay between resumptions. [#66889][#66889]
-- Previously, non-cancelable jobs, such as schema-change jobs, could fail while reverting due to transient errors, leading to unexpected results. Now, non-cancelable reverting jobs are retried instead of failing when transient errors are encountered. This mitigates the impact of temporary failures on non-cancelable reverting jobs. [#69087][#69087]
-- Added new columns in the `crdb_internal.jobs` table that show the current backoff state of a job and its execution log (see the sketch after this list). The execution log consists of a sequence of job start and end events and any associated errors that were encountered during each job's execution.
Now users can query the internal `crdb_internal.jobs` table to get more insights about jobs through the following columns: `last_run` shows the last execution time of a job; `next_run` shows the next execution time of a job based on exponential-backoff delay; `num_runs` shows the number of times the job has been executed; and `execution_log` provides a set of events that are generated when a job starts and ends its execution. [#68995][#68995] -- When jobs encounter retryable errors during execution, they will now record these errors into their state. The errors, as well as metadata about the execution, can be inspected via the newly added `execution_errors` field of `crdb_internal.jobs`, which is a `STRING[]` column. [#69370][#69370] - -
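A minimal sketch of inspecting the new job-backoff columns described above (the exact rows returned depend on cluster activity):

```sql
-- Inspect per-job retry state and execution history.
SELECT job_id, status, last_run, next_run, num_runs
FROM crdb_internal.jobs
ORDER BY next_run
LIMIT 10;
```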

Enterprise edition changes

-
-- Added a new `DEBUG_PAUSE_ON` option to [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) jobs to allow them to pause themselves on errors. [#69422][#69422]
-- [Changefeed option](https://www.cockroachlabs.com/docs/v21.2/create-changefeed#options) values are now case insensitive. [#69217][#69217]
-- Performance for [changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) during some range-split operations is now improved. [#66312][#66312]
-- [Cloud storage sinks for {{ site.data.products.enterprise }} changefeeds](https://www.cockroachlabs.com/docs/v21.2/create-changefeed#cloud-storage) are no longer experimental. [#69787][#69787]
-- Kafka sink URIs now accept the `topic_name` parameter to override per-table topic names. [#62377][#62377]
-- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v21.2/show-backup) now shows whether the backup is full or incremental under the `backup_type` column. [#63832][#63832]
-- Previously, if the regions of a restoring cluster did not match those of the backup cluster, the data would be restored as if the zone configuration did not exist. CockroachDB now checks the regions before restore, making users aware of mismatched regions between the backup and restore clusters. If there is a mismatched region, users can either update cluster localities or restore with the `--skip-localities-check` option to continue. [#64758][#64758]
-- Added `ca_cert` as a query parameter to the Confluent schema registry URL to trust custom certs on connection. [#65431][#65431]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) will now report more schema registry connection problems immediately at job creation time. [#65775][#65775]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/create-changefeed#options) can now be started with the `mvcc_timestamp` option to emit the MVCC timestamp of each row being emitted. This option is similar to the `updated` option, but the `mvcc_timestamp` will always contain the row's MVCC timestamp, even during the changefeed's initial backfill. [#65661][#65661]
-- Introduced a new webhook sink (prefix `webhook-https`) to send individual changefeed messages as webhook events (see the sketch after this list). [#66497][#66497]
-- [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) now supports restoring individual tables into a multi-region database. If the table being restored is also multi-region, `REGIONAL BY ROW` tables cannot be restored, and `REGIONAL BY TABLE` tables can only be restored if their localities match those of the database they're being restored into. [#65015][#65015]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) don't attempt to use protected timestamps when running in a multi-tenant environment. [#67285][#67285]
-- Added a new `on_error` option to pause changefeeds on non-retryable errors instead of failing. [#68176][#68176]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/changefeeds-in-multi-region-deployments) no longer fail when started on `REGIONAL BY ROW` tables. Note that in `REGIONAL BY ROW` tables, the `crdb_region` column becomes part of the primary index. Thus, changing an existing table to `REGIONAL BY ROW` will trigger a changefeed backfill with new messages emitted using the new composite primary key. [#68229][#68229]
-- Descriptor IDs of every object are now visible in [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v21.2/show-backup), along with the descriptor IDs of the object's database and parent schema.
`SHOW BACKUP` will display these IDs if the `WITH debug_ids` option is specified. [#68540][#68540]
-- The [changefeed Avro format](https://www.cockroachlabs.com/docs/v21.2/use-changefeeds#avro) is no longer marked as experimental. [#68818][#68818]
-- [Changefeed](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) statements now error if the provided sink URL does not contain a scheme. Such URLs are typically a mistake and will result in non-functional changefeeds. [#68978][#68978]
-- Added `WITH REASON = ` to [`PAUSE JOB`](https://www.cockroachlabs.com/docs/v21.2/pause-job) to gain visibility into why a job was paused by allowing pauses to be attached to a reason string. This reason is then persisted in the payload of the job and can be queried later. [#68909][#68909]
-- Because the [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) database privilege is being deprecated, CockroachDB now additionally checks for the [`CONNECT`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) privilege on the database to allow for backing up the database. Existing users with `SELECT` on the database can still back up the database, but it is now recommended to [`GRANT`](https://www.cockroachlabs.com/docs/v21.2/grant) `CONNECT` on the database. [#68391][#68391]
-- Added a `webhook_sink_config` JSON option to configure batching and flushing behavior, along with retry behavior, for webhook sink [changefeed](https://www.cockroachlabs.com/docs/v21.2/changefeed-sinks#webhook-sink) messages. [#68633][#68633]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/changefeed-sinks) will now error if an option is used with an incompatible sink. [#69173][#69173]
-- Fixed a bug where [changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) would fail to correctly handle a primary key change. [#69234][#69234]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) now correctly account for memory during backfills and "push back" (that is, slow down backfills) under memory pressure. [#69388][#69388]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) will now slow down correctly whenever there is a slowdown in the system (e.g., the downstream sink is slow). [#68288][#68288]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) will now flush the sink only when the frontier advances. This eliminates unnecessary sink flushes. [#67988][#67988]
-- Improved [changefeed](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) scalability, particularly when running against large tables, by reducing the rate of job progress updates. [#67815][#67815]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) can resume during backfill without losing too much progress. [#66013][#66013]
-
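A sketch combining a few of the changefeed additions above: the webhook sink plus the `mvcc_timestamp` and `on_error` options (the endpoint URL and table name are made up):

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'webhook-https://example.com/changefeed-events'
  WITH mvcc_timestamp, on_error = 'pause';
```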

SQL language changes

-
-- To perform [`REASSIGN OWNED BY`](https://www.cockroachlabs.com/docs/v21.2/reassign-owned), the current user running the command must now be a member of both the old and new [roles](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles). Previously, the new owner would need to be a member of the `CREATEDB` role if the object being changed was a database, to have [`CREATE` privileges](https://www.cockroachlabs.com/docs/v21.2/grant#supported-privileges) for the database if the object being changed was a schema, or to have `CREATE` privileges for the schema if the object being changed was a table or type. [#69382][#69382]
-- The [SQL stats compaction job](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer#table-statistics) now only shows up in the output of [`SHOW AUTOMATIC JOBS`](https://www.cockroachlabs.com/docs/v21.2/show-jobs). [#69641][#69641]
-- `DROP`s, `RENAME`s, and other light schema changes are no longer user cancelable, to avoid scenarios that do not properly roll back. [#69328][#69328]
-- Users can now opt to disable auto-rehoming for a session by setting `on_update_rehome_row_enabled = false`. This can be disabled by default using the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.on_update_rehome_row.enabled`. [#69626][#69626]
-- It is now possible to alter the owner of the [`crdb_internal_region` type](https://www.cockroachlabs.com/docs/v21.2/set-locality#crdb_region), which is created by initiating a [multi-region database](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview). [#69722][#69722]
-- Added more detail to the error message users receive when they call [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v21.2/show-backup) with a path pointing to the root of the collection, rather than a specific [backup](https://www.cockroachlabs.com/docs/v21.2/backup) in the collection. [#69638][#69638]
-- Introduced a new cluster setting `sql.stats.persisted_rows.max`, with a default value of `1000000` (1,000,000 rows). [#69667][#69667]
-- Previously, users had no way of determining which objects in their database utilized deprecated features like interleaved indexes/tables or cross-database references. Added the `crdb_internal` tables `cross_db_references`, `interleaved_indexes`, and `interleaved_tables` for detecting these deprecated features within a given database. [#61629][#61629]
-- The `sql.defaults.vectorize_row_count_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), as well as the corresponding `vectorize_row_count_threshold` session variable, have been removed. From now on, CockroachDB will behave exactly as if these were set to `0` (the last default value). [#62164][#62164]
-- Updated `crdb_internal.interleaved` to add the `parent_table_name` column, replacing the `parent_index_name` column. [#62076][#62076]
-- CockroachDB now references sequences used in views by their IDs to allow these sequences to be renamed. [#61439][#61439]
-- Added `SQLType` to classify `DDL`, `DML`, `DCL`, or `TCL` statement types. [#62989][#62989]
-- Implemented the geometry-based [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) `ST_IsValidTrajectory`. [#63072][#63072]
-- CockroachDB now accepts UUID inputs that have a hyphen after any group of four digits. This aligns with the UUID format used by PostgreSQL. For example, `a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11` is now considered a valid UUID.
[#63137][#63137]
-- The `gen_ulid` and `uuid_ulid_to_string` [built-ins](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) are now available for use. [#62440][#62440]
-- Single-key spans in [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.2/explain) and `EXPLAIN (DISTSQL)` are now shown without a misleading dash after them. [#61583][#61583]
-- Enabled locality-optimized search in the row execution engine. [#63384][#63384]
-- CockroachDB should now be more stable when executing queries with subqueries producing many rows (previously it could crash with an out-of-memory error; it will now use the temporary disk storage). [#63900][#63900]
-- Correlated common table expressions (CTEs) can now be used. [#63956][#63956]
-- Introduced a new session variable `distsql_workmem` that determines how much RAM a single operation of a single query can take before the operation must spill to disk. This is identical to the `sql.distsql.temp_storage.workmem` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) but has the session-level scope. [#63959][#63959]
-- `crdb_internal.node_statement_statistics` now stores `statement_id`. [#64076][#64076]
-- Added a line to the [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze) output to show disk spill usage, making it clear when disk spilling occurs while executing a query. This output is only shown when the disk usage is greater than zero. The verbiage in the DistSQL `EXPLAIN` diagrams changed from `max scratch disk allocated` to `max sql temp disk usage` for consistency and to match the way we talk about SQL spill disk usage elsewhere. [#64137][#64137]
-- Previously, committing a transaction when a portal was suspended would cause a "multiple active portals not supported" error. Now, the portal is automatically destroyed. [#63677][#63677]
-- Bulk IO operations are no longer included in service latency metrics. [#64442][#64442]
-- Collated strings may now have a locale that is a language tag, followed by a `-u-` suffix, followed by anything else. For example, any locale with a prefix of `en-US-u-` is now considered valid. [#64695][#64695]
-- Added new tables to `pg_catalog`: `pg_partitioned_table`, `pg_replication_origin_status`, `pg_init_privs`, `pg_replication_slots`, `pg_policy`, `pg_sequences`, `pg_subscription_rel`, `pg_largeobject_metadata`. [#64035][#64035]
-- Added new columns to `pg_catalog` tables: `pg_class` (`relminmxid`), `pg_constraint` (`conparentid`). [#64035][#64035]
-- Using the table name as a projection now works, e.g., `SELECT table_name FROM table_name` or `SELECT row_to_json(table_name) FROM table_name`. [#64748][#64748]
-- Added `generate_series` for `TIMESTAMPTZ` values. [#64887][#64887]
-- Added the `sql.defaults.require_explicit_primary_keys.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) for requiring explicit primary keys in [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.2/create-table) statements. [#64951][#64951]
-- SQL service latency now only includes metrics from DML statements. [#64893][#64893]
-- Added support for using `OPERATOR(operator)` for binary expressions. This only works for built-in CockroachDB operators. [#64701][#64701]
-- Introduced the `OPERATOR` syntax for unary operators. This only works for unary operators usable in CockroachDB. [#64701][#64701]
-- Implemented the geometry [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) function `ST_LineCrossingDirection`.
[#64997][#64997]
-- Added the `pg_relation_is_updatable` and `pg_column_is_updatable` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) functions. [#64788][#64788]
-- `information_schema.columns.data_type` now returns "`USER-DEFINED`" when the column is a user-defined type (e.g., [`ENUM`](https://www.cockroachlabs.com/docs/v21.2/enum)). [#65154][#65154]
-- CockroachDB now supports the scalar functions `get_byte()` and `set_byte()` as in PostgreSQL. [#65189][#65189]
-- CockroachDB now supports converting strings of hexadecimal digits prefixed by `x` or `X` to a BIT value, in the same way as PostgreSQL. Note that only the conversion via casts is supported (e.g., `'XAB'::BIT(8)`); PostgreSQL's literal constant syntax (e.g., `X'AB'::BIT(8)`) continues to have a different meaning in CockroachDB (byte array) due to historical reasons. [#65188][#65188]
-- [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v21.2/show-jobs) now shows the `trace_id` of the trace that is associated with the current execution of the job. This allows pulling inflight traces for a job for debugging purposes. [#65322][#65322]
-- The [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution) now supports the `ntile` window function. [#64977][#64977]
-- The `pg_sequences` table is now implemented in `pg_catalog`. [#65420][#65420]
-- A constant can now be cast to `regclass` without first converting the constant to an OID; e.g., `52::regclass` can now be used instead of `52::oid::regclass`. [#65432][#65432]
-- References to `WITH` expressions from correlated subqueries are now always supported. [#65550][#65550]
-- Added a new [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v21.2/show-jobs) command with additional information about changefeeds for improved user visibility. [#64956][#64956]
-- This is strictly a change for docgen and the SQL grammar. All `sql.y` statements (excluding those that are unimplemented or specified to be skipped) will now automatically have a `stmtSpec` defined for them, and thus will have BNF and SVG files automatically generated in `cockroachdb/generated-diagrams`. [#65278][#65278]
-- The [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution) now supports the `lag` and `lead` window functions. [#65634][#65634]
-- The total number of statement/transaction fingerprints stored in-memory can now be limited using the [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.metrics.max_mem_stmt_fingerprints` and `sql.metrics.max_mem_txn_fingerprints`. [#65902][#65902]
-- Previously, SQL commands sent via the PostgreSQL extended protocol that were too big would fail with an opaque error. This is now resolved by returning a friendlier error message. [#57590][#57590]
-- Namespace entries may no longer be queried via `system.namespace2`. [#65340][#65340]
-- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze) output now includes, for each plan step, the total time spent waiting for KV requests as well as the total time those KV requests spent contending with other transactions. [#66157][#66157]
-- Added the [`SHOW CREATE DATABASE`](https://www.cockroachlabs.com/docs/v21.2/show-create) command to get database metadata. [#66033][#66033]
-- Added `sample_plan`, `database_name`, and `exec_node_ids` columns to the `crdb_internal.node_statement_statistics` table. This allows for third-party and partner consumption of this data.
[#65782][#65782]
-- Implemented `pg_rewrite` for table-view dependencies. Table-view dependencies are no longer stored directly in `pg_depend`. [#65495][#65495]
-- Added virtual tables `crdb_internal.(node|cluster)_distsql_flows` that expose information about the flows of the DistSQL execution scheduled on remote nodes. These tables do not include information about the non-distributed queries or about local flows (from the perspective of the gateway node of the query). [#65727][#65727]
-- The order of columns used in a foreign key no longer needs to match the order in which the columns were defined in the referenced table's unique constraint. [#65209][#65209]
-- Previously, in some special cases ([`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert)s, as documented in [this issue](https://github.com/cockroachdb/docs/issues/9922)), support for distinct operations was missing in the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution). This has been added, and such operations will be able to spill to disk if necessary. However, in case the distinct operator does, in fact, spill to disk, there is a slight complication: the order in which rows are inserted can be non-deterministic. For example, for a query such as `INSERT INTO t VALUES (1, 1), (1, 2), (1, 3) ON CONFLICT DO NOTHING`, with `t` having the schema `a INT PRIMARY KEY, b INT`, it is possible that any of the three rows is actually inserted. PostgreSQL appears to have the same behavior. [#61582][#61582]
-- Added three new views to the `crdb_internal` schema to support developers investigating contention events: `cluster_contended_{tables, indexes, keys}`. [#66370][#66370]
-- Implemented `similar_escape` and made `similar_to_escape` compatible with PostgreSQL. [#66578][#66578]
-- The `"char"` column type will now truncate long values, in line with PostgreSQL. [#66422][#66422]
-- The contents of the statistics table in the information schema have changed; therefore, so have the results of [`SHOW INDEX`](https://www.cockroachlabs.com/docs/v21.2/show-index) and [`SHOW COLUMNS`](https://www.cockroachlabs.com/docs/v21.2/show-columns). A column that is not in the primary key will now be listed as belonging to the primary index as a stored column. Previously, it was simply not listed as belonging to the primary index. [#66599][#66599]
-- Added empty missing tables to `information_schema` for compatibility: `attributes`, `check_constraint_routine_usage`, `column_column_usage`, `column_domain_usage`, `column_options`, `constraint_table_usage`, `data_type_privileges`, `domain_constraints`, `domain_udt_usage`, `domains`, `element_types`, `foreign_data_wrapper_options`, `foreign_data_wrappers`, `foreign_server_options`, `foreign_servers`, `foreign_table_options`, `foreign_tables`, `information_schema_catalog_name`, `role_column_grants`, `role_routine_grants`, `role_udt_grants`, `role_usage_grants`, `routine_privileges`, `sql_features`, `sql_implementation_info`, `sql_parts`, `sql_sizing`, `transforms`, `triggered_update_columns`, `triggers`, `udt_privileges`, `usage_privileges`, `user_defined_types`, `user_mapping_options`, `user_mappings`, `view_column_usage`, `view_routine_usage`, `view_table_usage`. [#65854][#65854]
-- The [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v21.2/show-statements) command was extended for prepared statements to show the actual values in use at query time, rather than the previous `$1`, `$2`, etc. placeholders.
We expect showing these values will greatly improve the experience of debugging slow queries. [#66689][#66689]
-- Populated the `pg_type` table with entries for each table. Also populated `pg_class.reltype` with the corresponding `oid` in the `pg_type` table. [#66815][#66815]
-- Added a virtual table `crdb_internal.cluster_inflight_traces` which surfaces cluster-wide inflight traces for the `trace_id` specified via an index constraint. The output of this table is not appropriate to consume over a SQL connection; follow-up changes will add CLI wrappers to make the interaction more user-friendly. [#66679][#66679]
-- Added support for `iso_8601` and `sql_standard` as usable values for the `IntervalStyle` session variable. Also added a `sql.defaults.intervalstyle` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) to be used as the default interval style. [#67000][#67000]
-- Added a [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.primary_region`, which assigns a `PRIMARY REGION` to a database by default. [#67168][#67168]
-- Introduced a [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.allow_drop_final_region.enabled` which disallows dropping of a `PRIMARY REGION` (the final region of a database). [#67168][#67168]
-- [`IMPORT TABLE`](https://www.cockroachlabs.com/docs/v21.2/import) is deprecated in v21.2 and will be removed in a future release. Users should create a table using [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.2/create-table) and then [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) the newly created table. [#67275][#67275]
-- CockroachDB now uses `JSONB` instead of `BYTES` to store statement plans in `system.statement_statistics`. [#67331][#67331]
-- Reduced instantaneous memory usage during scans by up to 2x. [#66376][#66376]
-- Added the session variable `backslash_quote` for PostgreSQL compatibility. Setting it is a no-op; only `safe_encoding` is supported. [#67343][#67343]
-- Introduced a `crdb_internal.regions` table which contains data on all regions in the cluster. [#67098][#67098]
-- When parsing intervals, `IntervalStyle` is now taken into account. In particular, `IntervalStyle = 'sql_standard'` will make all interval fields negative if there is a negative symbol at the front; e.g., `-3 years 1 day` would be interpreted as `-(3 years 1 day)` in `sql_standard` and as `-3 years, 1 day` in the PostgreSQL `IntervalStyle`. [#67210][#67210]
-- `ROLLBACK TO SAVEPOINT` can now be used to recover from `LockNotAvailable` errors (`pgcode` `55P03`), which are returned when performing a [`FOR UPDATE SELECT`](https://www.cockroachlabs.com/docs/v21.2/select-for-update) with a `NOWAIT` wait policy. [#67514][#67514]
-- Added tables to `information_schema` that are present in MySQL. The tables are not populated and are entirely empty. `column_statistics`, `columns_extensions`, `engines`, `events`, `files`, `keywords`, `optimizer_trace`, `partitions`, `plugins`, `processlist`, `profiling`, `resource_groups`, `schemata_extensions`, `st_geometry_columns`, `st_spatial_reference_systems`, `st_units_of_measure`, `table_constraints_extensions`, `tables_extensions`, `tablespaces`, `tablespaces_extensions`, `user_attributes`. [#66795][#66795]
-- Added new [built-ins](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) `compress(data, codec)` and `decompress(data, codec)` which can compress and decompress bytes with the specified codec.
`Gzip` is the only currently supported codec. [#67426][#67426]
-- The column types of the results of `SHOW LAST QUERY STATISTICS` (an undocumented statement meant mostly for internal use by CockroachDB's SQL shell) have been changed from `INTERVAL` to `STRING`. They are populated with the durations of the various phases of execution, as if the duration, as an `INTERVAL`, was converted to `STRING` using the '`postgres`' `IntervalStyle`. This ensures that the server-side execution timings are always available regardless of the value of the `IntervalStyle` session variable. [#67654][#67654]
-- Changed the data types of the `information_schema.routines` columns `interval_precision`, `result_cast_char_octet_length`, and `result_cast_datetime_precision` to `INT`. [#67641][#67641]
-- Added syntax for granting and revoking privileges for all the tables in the specified schemas. New supported syntax: [`GRANT {privileges...} ON ALL TABLES IN SCHEMA {schema_names...} TO {roles...}`](https://www.cockroachlabs.com/docs/v21.2/grant); [`REVOKE {privileges...} ON ALL TABLES IN SCHEMA {schema_names...} FROM {roles...}`](https://www.cockroachlabs.com/docs/v21.2/revoke). This command is added for PostgreSQL compatibility. [#67509][#67509]
-- Added `pg_stat_database` and `pg_stat_database_conflicts` to `pg_catalog`. [#66687][#66687]
-- A database that is restored with the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.primary_region` will now have the `PRIMARY REGION` from the cluster setting assigned to the database. [#67581][#67581]
-- The [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution) now supports `CASE` expressions that output `BYTES`-like types. [#66399][#66399]
-- Introduced the `with_min_timestamp` and `with_max_staleness` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) functions. In a `SELECT` clause, they return the given timestamp and `(now() - interval)`, respectively, but they are intended for use in `AS OF SYSTEM TIME`, which will appear in an upcoming update. [#67697][#67697]
-- `first_value`, `last_value`, and `nth_value` window functions can now be executed in the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution). This allows for faster execution time, and also removes the need for conversions to and from row format. [#67764][#67764]
-- Improved performance of lookup joins in some cases. If join inequality conditions can be matched to index columns, CockroachDB now includes the conditions in the index lookup spans and removes them from the runtime filters. [#66002][#66002]
-- Added support for `ALTER DEFAULT PRIVILEGES` and default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) stored on databases. All objects created in a database will have the privilege set defined by the default privileges for that type of object on the database. The types of objects are `TABLES`, `SEQUENCES`, `SCHEMAS`, `TYPES`. Example: `ALTER DEFAULT PRIVILEGES GRANT SELECT ON TABLES TO foo` makes it such that all tables created by the user that executed the `ALTER DEFAULT PRIVILEGES` command will have the [`SELECT` privilege](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) on the table for user `foo`. Additionally, one can specify a [role](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles).
Example: `ALTER DEFAULT PRIVILEGES FOR ROLE bar GRANT SELECT ON TABLES TO foo`. All tables created by `bar` will have the [`SELECT` privilege](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) for `foo`. If a role is not specified, it uses the current user. For further context, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html). Currently, default privileges are not supported on the schema. Specifying a schema like `ALTER DEFAULT PRIVILEGES IN SCHEMA s` will error. `WITH GRANT OPTION` is ignored. `GRANT OPTION FOR` is also ignored. [#66785][#66785]
-- Introduced a `nearest_only` argument for `with_min_timestamp`/`with_max_staleness`, which enforces that bounded staleness reads only talk to the nearest replica. [#67837][#67837]
-- [`CREATE TABLE LIKE`](https://www.cockroachlabs.com/docs/v21.2/create-table) now copies hidden columns over. [#67799][#67799]
-- Populated `pg_catalog.pg_default_acl`. This is important for tracking which default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) are defined in the database. `pg_catalog.pg_default_acl` has 5 columns; privileges are represented by chars in the `aclitem[]` representation. See the PostgreSQL documentation for the [table of PostgreSQL-supported privileges and their `char` representations](https://www.postgresql.org/docs/current/ddl-priv.html#PRIVILEGES-SUMMARY-TABLE) and the [PostgreSQL definition of `pg_catalog.pg_default_acl`](https://www.postgresql.org/docs/13/catalog-pg-default-acl.html). [#67872][#67872]
-- An earlier commit changed CockroachDB to use the value of the `IntervalStyle` session var when interpreting interval to string conversions. However, this made `string::interval` and `interval::string` casts have a volatility of "stable" instead of "immutable". This has ramifications for items such as computed columns and check clauses, which can only use immutable expressions. This means that the particular results returned by these queries can become incoherent when `IntervalStyle` is customized to a different value from its default, `postgres`. In order to provide guardrails against this incoherence, CockroachDB now provides a new, separate configuration knob called `intervalstyle_enabled` that applications can use to "opt in" to the ability to customize `IntervalStyle`. The knob works as follows: The *primary* configuration mechanism is a new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) called `sql.defaults.intervalstyle.enabled`. This is the knob that operators and DBAs should customize manually. Then, as a *secondary* configuration mechanism, the value of the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) is also copied to each SQL session as a new session var `intervalstyle_enabled`. This is a performance optimization. SQL apps should not modify this session var directly, except for temporary testing purposes. In v22.1, upgrades will be disabled if these stable expressions are found in computed columns, check clauses, etc. [#67792][#67792]
-- Previously, `OPERATOR(+)int` would simplify to `+int` when parsed, which would lead to reproducibility issues when considering the order of operators. This is now fixed by leaving `OPERATOR(+)` in the tree. [#68041][#68041]
-- Previously, pretty printing could fold some `OPERATOR` expressions based on the order of operations of the operator inside the `OPERATOR`.
This can lead to a different order of operations when reparsing, so this is fixed by never folding `OPERATOR` expressions. [#68041][#68041]
-- Added the `SHOW CREATE SCHEDULE` command to view SQL statements used to create existing schedules. [#66782][#66782]
-- Created a [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) to reset the zone configurations of multi-region tables. This built-in can be helpful in cases where the user has overridden the [zone configuration](https://www.cockroachlabs.com/docs/v21.2/configure-replication-zones) for a given table and wishes to revert to the original system-specified state. [#67985][#67985]
-- Implemented the `parse_interval` and `to_char_with_style` [built-ins](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions), which convert strings from/to intervals with immutable volatility. [#67970][#67970]
-- Casting from interval to string or vice-versa is now blocked for computed columns, partial indexes, and partitions when the `intervalstyle_enabled` session setting is enabled. Instead, using `to_char_with_style(interval, style)` or `parse_interval(string, intervalstyle)` is available as a substitute. This is enforced in v22.1, but is opt-in for v21.2. It is recommended to set the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.intervalstyle.enabled` to `true` to avoid surprises when upgrading to v22.1. [#67970][#67970]
-- Common aggregate functions can now be executed in the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution). This allows for better memory accounting and faster execution in some cases. [#68081][#68081]
-- Retry information has been added to the statement trace under the `exec stmt` operation. The trace message is in the format: "executing after retries, last retry reason: ". This message will appear in any operations that show the statement trace, which is included in operations such as `SHOW TRACE FOR SESSION` and is also exported in the [statement diagnostics bundle](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#diagnostics). [#67941][#67941]
-- Added empty `pg_stat*` tables to `pg_catalog`: `pg_stat_all_indexes`, `pg_stat_all_tables`, `pg_stat_archiver`, `pg_stat_bgwriter`, `pg_stat_gssapi`, `pg_stat_progress_analyze`, `pg_stat_progress_basebackup`, `pg_stat_progress_cluster`, `pg_stat_progress_create_index`, `pg_stat_progress_vacuum`, `pg_stat_replication`, `pg_stat_slru`, `pg_stat_ssl`, `pg_stat_subscription`, `pg_stat_sys_indexes`, `pg_stat_sys_tables`, `pg_stat_user_functions`, `pg_stat_user_indexes`, `pg_stat_user_tables`, `pg_stat_wal_receiver`, `pg_stat_xact_all_tables`, `pg_stat_xact_sys_tables`, `pg_stat_xact_user_functions`, `pg_stat_xact_user_tables`, `pg_statio_all_indexes`, `pg_statio_all_sequences`, `pg_statio_all_tables`, `pg_statio_sys_indexes`, `pg_statio_sys_sequences`, `pg_statio_sys_tables`, `pg_statio_user_indexes`, `pg_statio_user_sequences`, `pg_statio_user_tables`. [#67947][#67947]
-- Added syntax for [`ALTER ROLE ... SET`](https://www.cockroachlabs.com/docs/v21.2/alter-role) statements. The business logic for these statements is not yet implemented, but will be added in a later commit.
The following forms are supported: `ALTER ROLE { name | ALL } [ IN DATABASE database_name ] SET var { TO | = } { value | DEFAULT }`, `ALTER ROLE { name | ALL } [ IN DATABASE database_name ] RESET var`, `ALTER ROLE { name | ALL } [ IN DATABASE database_name ] RESET ALL`. As with other statements, the keywords `ROLE` and `USER` are interchangeable. This matches the [PostgreSQL syntax](https://www.postgresql.org/docs/13/sql-alterrole.html). [#68001][#68001]
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) now supports backing up tables in a specified schema (e.g., `BACKUP my_schema.*`, or `my_db.my_schema.*`). Schemas will be resolved before databases, so `my_object.*` will resolve to a schema of that name in the current database before matching a database with that name. [#67649][#67649]
-- Added support for a new index hint, `NO_ZIGZAG_JOIN`, which will prevent the optimizer from planning a zigzag join for the specified table. The hint can be used in the same way as other existing index hints. For example, `SELECT * FROM table_name@{NO_ZIGZAG_JOIN};`. [#68141][#68141]
-- Added a `cardinality` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) function that returns the total number of elements in a given array. [#68263][#68263]
-- Introduced a new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `jobs.trace.force_dump_mode` that allows users to configure traceable jobs to dump their traces:
-  - `never`: Job will never dump its traces.
-  - `onFail`: Job will dump its trace after transitioning to the `failed` state.
-  - `onStatusChange`: Job will dump its trace whenever it transitions from the paused, canceled, succeeded, or failed state. [#67386][#67386]
-- DMY and YMD `DateStyle` values are now supported. [#68093][#68093]
-- When the date value is out of range, a hint now suggests that the user try a different `DateStyle`. [#68093][#68093]
-- When `DateStyle` and `IntervalStyle` are updated, this will now send a `ParamStatusUpdate` over the wire protocol. [#68093][#68093]
-- Added support to alter default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) for all roles. The syntax supported is `ALTER DEFAULT PRIVILEGES FOR ALL ROLES grant_default_privs_stmt/revoke_default_privs_stmt`. Only [`admin`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role) users are able to execute this. This allows adding default privileges for objects that are created by ANY role, as opposed to having to specify a creator role to which the default privileges will apply when creating an object. Example: `ALTER DEFAULT PRIVILEGES FOR ALL ROLES GRANT SELECT ON TABLES TO foo;`. Regardless of which user creates a table in the current database, `foo` will have `SELECT` on it. [#68076][#68076]
-- Added a `crdb_internal.reset_multi_region_zone_configs_for_database` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) to reset the zone configuration of a multi-region database. This built-in can be helpful in cases where the user has overridden the zone configuration for a given database and wishes to revert to the original system-specified state. [#68280][#68280]
-- The session setting `optimizer_improve_disjunction_selectivity` and its associated [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.optimizer_improve_disjunction_selectivity.enabled` are no longer supported.
They were added in v21.1.7 to enable better optimizer selectivity calculations for disjunctions. This logic is now always enabled. [#68349][#68349]
-- Running [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v21.2/alter-role) on any role that is a member of [`admin`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role) now requires the `admin` role. Previously, any user with the `CREATEROLE` option could `ALTER` an `admin`. [#68187][#68187]
-- Introduced an `hlc_to_timestamp` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions), which converts a CockroachDB HLC to a `TIMESTAMPTZ`. This is useful for pretty printing `crdb_internal_mvcc_timestamp` or `cluster_logical_timestamp()`, but is not useful for accuracy. [#68360][#68360]
-- Added a `crdb_internal.default_privileges` table that provides a human-readable way of examining default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges). [#67997][#67997]
-- Added support for `SHOW DEFAULT PRIVILEGES` and `SHOW DEFAULT PRIVILEGES FOR ROLE ...`. If a [role](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles) is not specified, default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) are shown for the current role. `SHOW DEFAULT PRIVILEGES` returns the following columns: `schema_name`, `role`, `object_type`, `grantee`, `privilege_type`. [#67997][#67997]
-- The `pg_db_role_setting` table of the `pg_catalog` is now implemented. When [`ALTER ROLE ... SET var`](https://www.cockroachlabs.com/docs/v21.2/alter-role) is used to configure per-role defaults, these default settings will be populated in `pg_db_role_setting`. This table contains the same data no matter which database the current session is using. For more context, see the [PostgreSQL documentation](https://www.postgresql.org/docs/13/catalog-pg-db-role-setting.html). [#68245][#68245]
-- Removed the `count` column from the `system.statement_statistics` and `system.transaction_statistics` tables. [#67866][#67866]
-- Introduced the `crdb_internal.index_usage_statistics` virtual table to surface index usage statistics. The `sql.metrics.index_usage_stats.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) can be used to turn on/off the subsystem. It defaults to `true`. [#66640][#66640]
-- The `bulkio.backup.proxy_file_writes.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) is no longer needed to enable proxied writes, which are now the default. [#68468][#68468]
-- Default session variable settings configured by [`ALTER ROLE ... SET`](https://www.cockroachlabs.com/docs/v21.2/alter-role) are now supported. The following order of precedence is used for variable settings:
  1. Settings specified in the connection URL as a query parameter
  2. Per-role and per-database settings configured by `ALTER ROLE`
  3. Per-role and all-database settings configured by `ALTER ROLE`
  4. All-role and per-database settings configured by `ALTER ROLE`
  5. All-role and all-database settings configured by `ALTER ROLE`
`RESET` does not validate the setting name. `SET` validates both the name and the proposed default value. Note that the default settings for a role are not inherited if one role is a member of another role that has default settings. Also, the defaults _only_ apply during session initialization. Using `SET DATABASE` to change databases does not apply default settings for that database. The `public`, `admin`, and `root` [roles](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles) cannot have default session variables configured. The `root` role also will never use the "all-role" default settings. This is so that `root` has fewer dependencies during session initialization and to make it less likely for `root` authentication to become unavailable during the loss of a node. Changing the default settings for a role requires the role running the `ALTER` command to either be an `ADMIN` or to have the `CREATEROLE` role option. Only `ADMIN`s can edit the default settings for another admin. Furthermore, changing the default settings for `ALL` roles is _only_ allowed for `ADMIN`s. Roles without `ADMIN` or `CREATEROLE` _cannot_ change the default settings for themselves. [#68128][#68128]
-- An earlier commit changed CockroachDB to use the value of the `DateStyle` session var when interpreting date to string conversions (and vice-versa). However, this made `string::{date,timestamp}` and `{date,timetz,time,timestamp}::string` casts have a volatility of "stable" instead of "immutable". This has ramifications for items such as computed columns and check clauses, which can only use immutable expressions. This means that the particular results returned by these queries can become incoherent when `DateStyle` is customized to a different value from its default, `ISO,MDY`. In order to provide guardrails against this incoherence, CockroachDB now provides a new, separate configuration knob called `datestyle_enabled` that applications can use to "opt in" to the ability to customize `DateStyle`:
  - By default, this knob is false and applications cannot customize `DateStyle`. Invalid conversions that are already stored in the schema are unaffected and continue to produce results as per the default style `ISO,MDY`.
  - When the knob is true, these things happen:
    - Apps can start customizing `DateStyle` in SQL sessions.
    - A SQL session that has a custom `DateStyle` will start observing *incoherent results* when accessing tables that already contain the aforementioned casts.
    - New schemas cannot have the above casts.
The knob works as follows: The *primary* configuration mechanism is a new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) called `sql.defaults.datestyle.enabled`. This is the knob that operators and DBAs should customize manually. Then, as a *secondary* configuration mechanism, the value of the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) is also copied to each SQL session as a new session var `datestyle_enabled`. This is a performance optimization. SQL apps should not modify this session var directly, except for temporary testing purposes. In v22.1, upgrades will be disabled if these stable expressions are found in computed columns, check clauses, etc. [#68352][#68352]
-- Introduced `parse_interval` and `to_char`, which take in a string or interval and assume the PostgreSQL `IntervalStyle` for their output. [#68351][#68351]
-- `parse_timestamp` now has a two-argument variant, which takes in a `DateStyle` and parses timestamps according to that `DateStyle`. The one-argument version assumes MDY. These [built-ins](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) have an immutable volatility. [#68351][#68351]
-- Introduced a `timestamp, DateStyle` variant of `to_char_with_style`, which converts timestamps to a string with an immutable volatility. There is also a one-argument `to_char` for timestamp values which assumes the `ISO,MDY` output style. [#68351][#68351]
-- Introduced a `parse_date` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) with two variants. The single-argument variant parses a date and assumes the `ISO,MDY` datestyle, and the two-argument variant parses a date assuming the `DateStyle` given in the second argument. This provides an immutable way of casting strings to dates. [#68351][#68351]
-- Introduced `to_char(date)`, which assumes a `DateStyle` of `ISO,MDY` and outputs the date in that format. There is also a `to_char_with_style(date, DateStyle)` variant which outputs the date in the chosen `DateStyle`. This provides an immutable way of casting dates to strings. [#68351][#68351]
-- Implemented the `parse_time` and `parse_timetz` [built-ins](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions), which parse a [`TIME` or `TIMETZ`](https://www.cockroachlabs.com/docs/v21.2/time) with immutable volatility. [#68351][#68351]
-- Some queries with lookup joins and/or top K sorts are now more likely to be executed in a "local" manner with the `distsql=auto` session variable. [#68524][#68524]
-- SQL stats can now be persisted into the `system.statement_statistics` and `system.transaction_statistics` tables by enabling the `sql.stats.flush.enable` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings). The interval of persistence is determined by the new `sql.stats.flush.interval` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), which defaults to 1 hour. [#67090][#67090]
-- The `lock_timeout` session variable is now supported. The configuration can be used to abort a query with an error if it waits longer than the configured duration while blocking on a single row-level lock acquisition. [#68042][#68042]
-- Added support for `GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY` syntax in a column definition. This will automatically create a sequence for the given column. This matches PostgreSQL syntax and functionality.
For more context, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-createtable.html). [#68711][#68711]
-- Scans over an index followed by a join on the primary index to retrieve the required columns now have improved performance in the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution). [#67450][#67450]
-- `SHOW DEFAULT PRIVILEGES FOR ALL ROLES` is now supported. `SHOW DEFAULT PRIVILEGES` returns a second column, `for_all_roles` (`bool`), which indicates whether or not the default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) shown are for all roles. [#68607][#68607]
-- `SHOW DEFAULT PRIVILEGES` now only shows default privileges for the current user. [#68607][#68607]
-- If a role has a default [privilege](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) defined for it or by it, the role cannot be dropped until the default privilege is removed. Example: `ALTER DEFAULT PRIVILEGES FOR ROLE test1 GRANT SELECT ON TABLES TO test2;`. Neither `test1` nor `test2` can be dropped until performing `ALTER DEFAULT PRIVILEGES FOR ROLE test1 REVOKE SELECT ON TABLES FROM test2;`. [#67950][#67950]
-- Added new metrics to track schema change job failures (`sql.schema_changer.errors.all`, `sql.schema_changer.errors.constraint_violation`, `sql.schema_changer.errors.uncategorized`); these errors are also surfaced in the `crdb_internal.feature_usage` table. [#68252][#68252]
-- Indexes on expressions can now be created. These indexes can be used to satisfy queries that contain filters with identical expressions. For example, `SELECT * FROM t WHERE a + b = 10` can utilize an index like `CREATE INDEX i ON t ((a + b))` (see the sketch at the end of this section). [#68807][#68807]
-- Roles named `none` or starting with `pg_` or `crdb_internal` can no longer be created. Any existing roles with these names may continue to work, but they may be broken when new features (e.g., `SET ROLE`) are introduced. [#68972][#68972]
-- Implemented the geometry-based [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) `ST_Translate` on arguments `{geometry, float8, float8, float8}`. [#68959][#68959]
-- Errors involving `MinTimestampBoundUnsatisfiableError` during a bounded staleness read now get a custom `pgcode` (`54C01`). [#68967][#68967]
-- Table statistics are no longer collected for views. [#68997][#68997]
-- Added support for `SCHEMA` comments using PostgreSQL's `COMMENT ON SCHEMA` syntax. [#68606][#68606]
-- `SET ROLE user` now parses, without requiring an equals sign between `ROLE` and the role name. This functionality is not yet implemented. [#68750][#68750]
-- Allowed `RESET ROLE` to be parsed. This is not yet implemented. [#68750][#68750]
-- Added a `session_user()` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) function, which currently returns the same thing as `current_user()`, as `SET ROLE` is not implemented. [#68749][#68749]
-- Introduced a new `crdb_internal.statement_statistics` virtual table that surfaces both cluster-wide in-memory statement statistics as well as persisted statement statistics. [#68715][#68715]
-- The `regrole` OID alias type is now supported, which is a PostgreSQL-compatible object identifier alias that references the `pg_catalog.pg_authid` table. [#68877][#68877]
-- CockroachDB now correctly sends the `RESET` tag instead of the `SET` tag when a `RESET` statement is run.
[#69053][#69053]
- Bounded staleness reads now retry transactions when `nearest_only=True` and a schema change is detected which may prevent a [follower read](https://www.cockroachlabs.com/docs/v21.2/follower-reads) from being served. [#68969][#68969]
- Granting `SELECT`, `UPDATE`, `INSERT`, and `DELETE` on databases is being deprecated. The syntax is still supported, but is automatically converted to the equivalent `ALTER DEFAULT PRIVILEGES FOR ALL ROLES` command. The user is given a notice that the privilege is incompatible and is automatically being converted to an `ALTER DEFAULT PRIVILEGES FOR ALL ROLES` command. [#68391][#68391]
- The syntax for setting database placement is now `ALTER DATABASE db PLACEMENT ...`. (The `SET` keyword is no longer allowed before the `PLACEMENT` keyword.) [#69067][#69067]
- The `ALTER DATABASE db SET var ...` syntax is now supported. It is a syntax alias for `ALTER ROLE ALL IN DATABASE db SET var ...`, since it is identical to that functionality: it configures the default value to use for a session variable when a user connects to the given database. [#69067][#69067]
- Implemented the `crdb_internal.serialize_session()` and `crdb_internal.deserialize_session(bytes)` [built-ins](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions). The former outputs the session settings in a string that can be deserialized into another session by the latter. [#68792][#68792]
- Added a [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `schedules.backup.gc_protection_enabled` that defaults to `true` and enables chaining of GC protection across backups run as part of a schedule. [#68446][#68446]
- `crdb_internal.pb_to_json` no longer emits default values by default. [#69185][#69185]
- Implemented `pg_shdepend` with shared dependencies on tables, databases, and pinned users/roles. [#68018][#68018]
- Added a [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) function `crdb_internal.datums_to_bytes`, which can encode any data type which can be used in a forward index key into bytes in an immutable way. This function is now used in the expression for [hash-sharded indexes](https://www.cockroachlabs.com/docs/v21.2/hash-sharded-indexes). [#67865][#67865]
- Introduced a new `crdb_internal.transaction_statistics` virtual table that surfaces both cluster-wide in-memory transaction statistics as well as persisted transaction statistics. [#69049][#69049]
- Introduced `SET ROLE`, which allows users with certain permissions to assume the identity of another user. It is worth noting that, due to cross-version compatibility, the `session_user` session variable will always return the same value as `current_user` until v22.1; use the `session_user()` built-in function instead if you require this information. [#68973][#68973]
- An `ON UPDATE` expression can now be added to a column. Whenever a row is updated without modifying the `ON UPDATE` column, the column's `ON UPDATE` expression is re-evaluated, and the column is updated to the result. [#69091][#69091]
- [Roles](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles) have a default set of default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges). For example, a role has `ALL` privileges on all objects as its default privileges when it creates the object. Additionally, the `public` role has `USAGE` as a default privilege.
This matches PostgreSQL's behavior such that the creator role and `public` role have the same set of default privileges in the default state. Now, when a user creates a table, sequence, type, or schema, it will automatically have `ALL` privileges on it, and `public` will have `USAGE` on types. This can be altered: `ALTER DEFAULT PRIVILEGES FOR ROLE rolea REVOKE ALL ON ... FROM rolea` will remove the default set of default privileges on the specified object from the role. [#68500][#68500]
- `SHOW DEFAULT PRIVILEGES` shows implicit privileges. Implicit privileges are "default" default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges). For example, the creator should have all privileges on any object that it creates. This is now reflected in `SHOW DEFAULT PRIVILEGES`. [#69377][#69377]
- Added a `system.span_configurations` table. This will later be used to store authoritative span configs. [#69047][#69047]
- Added support for `GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY (seq_option)` syntax under [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.2/create-table) (see the sketch at the end of this section). An `IDENTITY` column is an auto-incremented column based on an automatically created sequence, with the same performance considerations as the [`CREATE SEQUENCE` syntax](https://www.cockroachlabs.com/docs/v21.2/create-sequence#considerations). The `seq_option` is consistent with the sequence option syntax in `CREATE SEQUENCE`, and [`CACHE` in `seq_option`](https://www.cockroachlabs.com/docs/v21.2/create-sequence#cache-sequence-values-in-memory) is also supported for better performance. Such a column can only be of `integer` type, and is implicitly `NOT NULL`. It is essentially the same as `SERIAL` with `serial_normalization=sql_sequence`, except for whether users can override its value. A `GENERATED ALWAYS AS IDENTITY` column cannot be overridden without specifying `OVERRIDING SYSTEM VALUE` in an `INSERT`/`UPSERT`/`UPDATE` statement; this restriction cannot be bypassed via the `ON CONFLICT` syntax. Such a column can only be updated to `DEFAULT`. A `GENERATED BY DEFAULT AS IDENTITY` column can be overridden without specifying any extra syntax, and users are allowed to add repeated values to such a column. It can also be updated to a custom expression, but only accepts an `integer`-typed expression result. `GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY` is also supported under the `ALTER TABLE ... ADD COLUMN ...` syntax. This matches the [PostgreSQL syntax](https://www.postgresql.org/docs/current/sql-createtable.html). [#69107][#69107]
- Added `transaction_fingerprint_id` to the `system.statement_statistics` primary key. [#69320][#69320]
- Introduced `crdb_internal.schedule_sql_stats_compaction()` to manually create a SQL stats compaction schedule. Extended the `SHOW SCHEDULES` command to support `SHOW SCHEDULES FOR SQL STATISTICS`. [#68401][#68401]
- The [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.experimental_auto_rehoming.enabled` and session setting `experimental_enable_auto_rehoming` were added to enable auto-rehoming on [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update) for `REGIONAL BY ROW` tables. [#69381][#69381]
- `SHOW is_superuser` now works, and is set to true if the user has `root` privileges. [#69224][#69224]
- Introduced `SET LOCAL`, which sets a session variable for the duration of the transaction. `SET LOCAL` is a no-op outside a transaction. [#69224][#69224]
- `SET LOCAL` now works with `SAVEPOINT`s.
`ROLLBACK` will roll back any variables set with `SET LOCAL`. `RELEASE SAVEPOINT` will continue to use the variables set by `SET LOCAL` in the transaction. [#69224][#69224]
- Interleaved syntax for `CREATE TABLE`/`INDEX` is now a no-op, since support has been removed. [#69304][#69304]
- `crdb_internal.reset_sql_stats()` now resets persisted SQL stats. [#69273][#69273]
- Changed the `plan_hash` column in both `system.statement_statistics` and `crdb_internal.statement_statistics` from `Int` to `Bytes`. [#69502][#69502]
- Added the optional `IF NOT EXISTS` clause to the `CREATE SCHEDULE` statement, making the statement idempotent. [#69152][#69152]
- `SHOW is_superuser` now works, and is set to true if the user has [`admin`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role) privileges. [#69355][#69355]
- The `set_config` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) function now allows the `local` parameter to be true. This is the same as using `SET LOCAL`. [#69480][#69480]
- Added a new `as_json` option to [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v21.2/show-backup) which renders the backup manifest as a JSON value. [#62628][#62628]
- Added a new [`EXPLAIN` flag](https://www.cockroachlabs.com/docs/v21.2/explain), `MEMO`, to be used with [`EXPLAIN (OPT)`](https://www.cockroachlabs.com/docs/v21.2/explain#opt-option). When the `MEMO` flag is passed, a representation of the optimizer memo will be printed along with the best plan. The `MEMO` flag can be used in combination with other flags such as `CATALOG` and `VERBOSE`. For example, `EXPLAIN (OPT, MEMO, VERBOSE)` will print the memo along with verbose output for the best plan.
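As an illustrative sketch of the `GENERATED ... AS IDENTITY` behavior described above (the `orders` table and its columns are hypothetical, and the `seq_option` values shown are just one possible choice):

~~~sql
-- An auto-incrementing identity column backed by an automatically created sequence.
CREATE TABLE orders (
    id   INT GENERATED ALWAYS AS IDENTITY (START 1 INCREMENT 1 CACHE 16),
    item STRING
);

-- The id column is populated from the implicit sequence.
INSERT INTO orders (item) VALUES ('widget');

-- GENERATED ALWAYS rejects explicit values unless OVERRIDING SYSTEM VALUE is given.
INSERT INTO orders (id, item) OVERRIDING SYSTEM VALUE VALUES (100, 'gadget');
~~~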

<h3 id="v21-2-0-beta-1-operational-changes">Operational changes</h3>

- A [new session variable](https://www.cockroachlabs.com/docs/v21.2/set-vars#supported-variables), `large_full_scan_rows`, as well as the corresponding cluster setting `sql.defaults.large_full_scan_rows`, are now available. This setting determines which tables are considered "large" for the purposes of the `disallow_full_table_scans` feature, so that only full table/index scans of "large" tables are rejected. The default value for the new setting is `1000`; in order to reject all full table/index scans (the previous behavior), one can set the new setting to `0`. Internally issued queries aren't affected, and the new setting has no impact when the `disallow_full_table_scans` feature is not enabled. [#69371][#69371]
- Introduced a new metric called `txn.restarts.commitdeadlineexceeded` that tracks the number of transactions that were forced to restart because their commit deadline was exceeded (`COMMIT_DEADLINE_EXCEEDED`). [#69671][#69671]
- The default value of the `storage.transaction.separated_intents.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) was changed to `true`. [#64831][#64831]
- Node-level admission control that considers the CPU resource was introduced for KV request processing, and for (SQL-side) processing of KV responses. This admission control can be enabled using `admission.kv.enabled` and `admission.sql_kv_response.enabled`. [#65614][#65614]
- The new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `bulkio.backup.merge_file_size` allows [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) to buffer and merge smaller files to reduce the number of small individual files created by `BACKUP`. [#66856][#66856]
- Increased the timeout for range MVCC garbage collection from 1 minute to 10 minutes, to allow larger jobs to run to completion. [#65001][#65001]
- MVCC and intent garbage collection now triggers when the average intent age is 8 hours, down from 10 days. [#65001][#65001]
- Added a `server.authentication_cache.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) that defaults to `true`. When enabled, this cache stores authentication-related data and will improve the latency of authentication attempts. Keeping the cache up to date adds additional overhead when using the [`CREATE`](https://www.cockroachlabs.com/docs/v21.2/create-role), [`ALTER`](https://www.cockroachlabs.com/docs/v21.2/alter-role), and [`DROP ROLE`](https://www.cockroachlabs.com/docs/v21.2/drop-role) commands. To minimize the overhead, any bulk `ROLE` operations should be run inside of a transaction. To make the cache more effective, any regularly-scheduled `ROLE` updates should be done all together, rather than occurring throughout the day at all times. [#66919][#66919]
- Introduced a new metric `exportrequest.delay.total` to track how long `ExportRequests` (issued by [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup)) are delayed by throttling mechanisms. [#67310][#67310]
- Enabling `admission.kv.enabled` may provide better inter-tenant isolation for multi-tenant KV nodes. [#67533][#67533]
- [`debug.zip`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-zip) files no longer contain the file `threads.txt`, which was previously used to list RocksDB background threads. [#67389][#67389]
- DistSQL response admission control can now be enabled using the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `admission.sql_sql_response.enabled`.
[#67531][#67531]
- Added the `kv.bulk_sst.target_size` and `kv.bulk_sst.max_allowed_overage` [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) that control the batch size used by export requests during [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup). [#67705][#67705]
- [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) no longer dynamically reads from the `kv.bulk_ingest.batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) to determine its batch size. If the value is updated, [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) jobs need to be `PAUSE`d and `RESUME`d to adopt the updated setting. [#68105][#68105]
- The memory pool used for SQL is now also used to cover KV memory used for scans. [#66362][#66362]
- CockroachDB now records a log event and increments a counter when removing an expired session. [#68476][#68476]
- Added an automatically created, on-by-default emergency ballast file. This new ballast defaults to the minimum of 1% of total disk capacity or 1 GiB. The size of the ballast may be configured via the `--store` flag with a `ballast-size` field, accepting the same value formats as the `size` field. Also added a new `Disk Full (10)` exit code that indicates that the node exited because disk space on at least one store is exhausted. On node start, if any store has less than half the ballast's size available, the node immediately exits with the `Disk Full (10)` exit code. The operator may manually remove the configured ballast (assuming they haven't already) to allow the node to start, and they can take action to remedy the disk space exhaustion. The ballast will automatically be recreated when available disk space is 4x the ballast size, or at least 10 GiB is available after the ballast is created. [#66893][#66893]
- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), `sql.mutations.max_row_size.log`, which controls large row logging. Whenever a row larger than this size is written (or a single column family, if multiple column families are in use), a `LargeRow` event is logged to the [`SQL_PERF`](https://www.cockroachlabs.com/docs/v21.2/logging#sql_perf) channel (or a `LargeRowInternal` event is logged to [`SQL_INTERNAL_PERF`](https://www.cockroachlabs.com/docs/v21.2/logging#sql_internal_perf) if the row was added by an internal query). This could occur for [`INSERT`](https://www.cockroachlabs.com/docs/v21.2/insert), [`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update), [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v21.2/create-table), [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v21.2/create-index), [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v21.2/alter-table), [`ALTER INDEX`](https://www.cockroachlabs.com/docs/v21.2/alter-index), [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import), or [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) statements. [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/select-clause), [`DELETE`](https://www.cockroachlabs.com/docs/v21.2/delete), [`TRUNCATE`](https://www.cockroachlabs.com/docs/v21.2/truncate), and [`DROP`](https://www.cockroachlabs.com/docs/v21.2/drop-table) are not affected by this setting.
[#67953][#67953]
- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), `sql.mutations.max_row_size.err`, which limits the size of rows written to the database (or individual column families, if multiple column families are in use). Statements trying to write a row larger than this will fail with a code `54000 (program_limit_exceeded)` error. (Internal queries writing a row larger than this will not fail, but will log a `LargeRowInternal` event to the [`SQL_INTERNAL_PERF`](https://www.cockroachlabs.com/docs/v21.2/logging#sql_internal_perf) channel.) This limit is enforced for `INSERT`, `UPSERT`, and `UPDATE` statements. [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v21.2/create-table), `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, and [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) will not fail with an error, but will log `LargeRowInternal` events to the [`SQL_INTERNAL_PERF`](https://www.cockroachlabs.com/docs/v21.2/logging#sql_internal_perf) channel. [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/select-clause), [`DELETE`](https://www.cockroachlabs.com/docs/v21.2/delete), [`TRUNCATE`](https://www.cockroachlabs.com/docs/v21.2/truncate), and [`DROP`](https://www.cockroachlabs.com/docs/v21.2/drop-table) are not affected by this limit. Note that existing rows violating the limit **cannot** be updated, unless the update shrinks the size of the row below the limit, but *can* be selected, deleted, altered, backed up, and restored. For this reason, we recommend using the accompanying setting `sql.mutations.max_row_size.log` in conjunction with `SELECT pg_column_size()` queries to detect and fix any existing large rows before lowering `sql.mutations.max_row_size.err` (see the sketch at the end of this section). [#67953][#67953]
- The new [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `admission.l0_sub_level_count_overload_threshold` and `admission.l0_file_count_overload_threshold` can be used to tune admission control. [#69311][#69311]
- The new [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.transaction_rows_written_log`, `sql.defaults.transaction_rows_written_err`, `sql.defaults.transaction_rows_read_log`, and `sql.defaults.transaction_rows_read_err` (as well as the corresponding session variables) have been introduced. These settings determine the "size" of transactions, in rows written and read, upon reaching which the transactions are logged or rejected. The logging will go into the [`SQL_PERF`](https://www.cockroachlabs.com/docs/v21.2/logging#sql_perf) logging channel. Note that internal queries (i.e., those issued by CockroachDB internally) cannot error out, but are instead logged to the [`SQL_INTERNAL_PERF`](https://www.cockroachlabs.com/docs/v21.2/logging#sql_internal_perf) logging channel. The "written" limits apply to [`INSERT`](https://www.cockroachlabs.com/docs/v21.2/insert), [`INSERT INTO SELECT FROM`](https://www.cockroachlabs.com/docs/v21.2/insert), [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v21.2/insert), [`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update), and [`DELETE`](https://www.cockroachlabs.com/docs/v21.2/delete), whereas the "read" limits apply to the [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/select-clause) statement in addition to all of these.
These limits will not apply to [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v21.2/create-table), [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/select-clause), [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import), [`TRUNCATE`](https://www.cockroachlabs.com/docs/v21.2/truncate), [`DROP`](https://www.cockroachlabs.com/docs/v21.2/drop-table), [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v21.2/alter-table), [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup), [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore), or [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.2/create-statistics) statements. Note that enabling the `transaction_rows_read_err` guardrail comes at the cost of disabling the auto-commit optimization for mutation statements in implicit transactions. [#69202][#69202]
- The `cockroach debug tsdump` command now downloads histogram timeseries that it previously silently omitted. [#69469][#69469]
- The new variables `sql.mutations.max_row_size.{log|err}` were renamed to `sql.guardrails.max_row_size_{log|err}` for consistency with other variables and metrics. [#69457][#69457]
- Improved range feed observability by adding a `crdb_internal.active_range_feeds` virtual table, which lists all currently executing range feeds on the node. [#69055][#69055]
- Upgrading to the next version will be blocked if interleaved tables/indexes exist. Users should convert existing interleaved tables/indexes to non-interleaved ones, or drop any interleaved tables/indexes, before upgrading to the next version. [#68074][#68074]
- Added support for the DataDog tracer. Set the `trace.datadog.agent` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) to enable it if you have the DataDog collector service running. [#61602][#61602]
- Upgraded the Lightstep tracer version, resulting in better observability for Lightstep users. [#61593][#61593]
- Added four new metrics, `sql.guardrails.max_row_size_{log|err}.count{.internal}`, which are incremented whenever a large row violates the corresponding `sql.guardrails.max_row_size_{log|err}` limit. [#69457][#69457]
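A minimal sketch of wiring up the row-size guardrails described above, assuming the renamed `sql.guardrails.*` setting names and a hypothetical table `t(id, payload)` (the thresholds shown are arbitrary):

~~~sql
-- Log writes of rows larger than 1 MiB; reject writes of rows larger than 8 MiB.
SET CLUSTER SETTING sql.guardrails.max_row_size_log = '1MiB';
SET CLUSTER SETTING sql.guardrails.max_row_size_err = '8MiB';

-- Look for existing oversized rows before lowering the error threshold.
SELECT id, pg_column_size(id) + pg_column_size(payload) AS approx_row_bytes
FROM t
ORDER BY approx_row_bytes DESC
LIMIT 10;
~~~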

<h3 id="v21-2-0-beta-1-command-line-changes">Command-line changes</h3>

- Added `load show` args to display a subset of backup metadata. Users can display subsets of metadata of a single manifest by using `load show files/descriptors/metadata/spans `. [#61131][#61131]
- Updated `load show` with a `summary` subcommand to display the meta information of an individual backup. Users are now able to inspect backup metadata, files, spans, and descriptors with the CLI command `cockroach load show summary ` without a running cluster. [#61131][#61131]
- Updated `load show` with an `incremental` subcommand to display incremental backups. Users can list the incremental backup paths of a full backup by running `cockroach load show incremental `. [#61862][#61862]
- Added `load show backups` to display a backup collection. Previously, users could list backups created by [`BACKUP INTO`](https://www.cockroachlabs.com/docs/v21.2/backup) via [`SHOW BACKUP IN`](https://www.cockroachlabs.com/docs/v21.2/show-backup) in a SQL session, but this listing task can also be done offline without a running cluster. Now, users are able to list backups in a collection with `cockroach load show backups `. [#61862][#61862]
- The command `cockroach sysbench` has been removed. Users who depend on this command can use a copy of a `cockroachdb` executable binary from a previous version. [#62305][#62305]
- The number of connection retries and connection timeouts for configurations generated by `cockroach gen haproxy` have been tweaked. [#62308][#62308]
- Extended `load show` with a `load show data` subcommand to display backup table data. By running `cockroach load show data `, users are able to inspect the data of a table in a backup. Also added an `--as-of` flag to the `load show data` command. Users are able to show backup snapshot data at a specified timestamp by running `cockroach load show data
--as-of='-1s'`. [#62662][#62662]
- Updated the `load show data` command to display backup data in CSV format. Users can either pipe the output to a file or specify the destination with a `--destination` flag. [#62662][#62662]
- Previously, `load show summary` would output results in an unstructured way, which made it harder to filter information. `load show summary` now outputs the information in JSON format, which is easier to handle and can be filtered through another command-line JSON processor. [#63100][#63100]
- Previously, `--as-of` of `load show data` had the restriction that users could only inspect data at an exact backup timestamp. The flag has been improved to work with [backups with revision history](https://www.cockroachlabs.com/docs/v21.2/take-backups-with-revision-history-and-restore-from-a-point-in-time) so that users can inspect data at an arbitrary timestamp. [#63181][#63181]
- Certain errors caused by invalid command-line arguments are now printed on the process's standard error stream, instead of standard output. [#63839][#63839]
- The `cockroach gen autocomplete` command has been updated and can now produce autocompletion definitions for the `fish` shell. [#63839][#63839]
- Previously, backup inspection was done via `cockroach load show ..`, which could confuse users with ambiguous verbs in the command chain. The syntax is now clearer and more indicative for users debugging backups. The changes are: `load show summary ` -> `debug backup show `; `load show incremental ` -> `debug backup list-incremental `; `load show backups ` -> `debug backup list-backups `; `load show data ` -> `debug backup export --table=`. [#63309][#63309]
- Previously, `\demo shutdown ` would error if `--global` was set. This now errors gracefully as an unsupported behavior. [#62435][#62435]
- The `--global` flag for [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) is now advertised. This flag simulates real-life global latencies in multi-node demo clusters when the nodes are placed in different regions. [#62435][#62435]
- There will now be a message upon start-up of [`cockroach demo --global`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) indicating that latencies between nodes will simulate real-world latencies. [#62435][#62435]
- Added `--max-rows` and `--start-key` flags to the `cockroach debug backup` tool so that users can specify the number of exported rows and the start key when inspecting data from a backup. [#64157][#64157]
- Added `--with-revisions` to `debug backup export` to allow users to export revisions of table data. If `--with-revisions` is specified, revisions of data are returned to users, with an extra column displaying the revision time of that record. This is an experimental/beta feature of the `cockroach debug backup` tool to allow users to export revisions of data from a backup. [#64285][#64285]
- Renamed `connect` to `connect init`, and added a `connect join` command to retrieve certificates from an existing secure cluster and set up a new node to connect with it. [#63492][#63492]
- The `cockroach debug keys` command recognizes a new flag `--type` that constrains the types of displayed entries. This enables more efficient introspection of storage in certain troubleshooting scenarios. [#64879][#64879]
- Server health metrics are now a structured event sent to the [`HEALTH`](https://www.cockroachlabs.com/docs/v21.2/logging#health) logging channel.
The structured event is optimized for machine readability. For details about the event payload, refer to the [reference documentation](https://www.cockroachlabs.com/docs/v21.2/eventlog#runtime_stats). [#65024][#65024]
- The `cockroach debug pebble` tool can now be used with encrypted stores. [#64908][#64908]
- Added a `cockroach debug job-trace` command that takes two arguments, `` and a file destination, along with a `--url` flag pointing to the node against which to execute this command. The command pulls information about inflight trace spans associated with the job and dumps it to the file destination. [#65324][#65324]
- The new subcommand `cockroach convert-url` converts a connection URL, such as those printed out by [`cockroach start`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start) or included in the online documentation, to the syntax recognized by various client drivers. For example:

    ~~~
    $ ./cockroach convert-url --url "postgres://foo/bar"
    Connection URL for libpq (C/C++), psycopg (Python), lib/pq & pgx (Go),
    node-postgres (JS) and most pq-compatible drivers:
      postgresql://root@foo:26257/bar
    Connection DSN (Data Source Name) for Postgres drivers that accept
    DSNs - most drivers and also ODBC:
      database=bar user=root host=foo port=26257
    Connection URL for JDBC (Java and JVM-based languages):
      jdbc:postgresql://foo:26257/bar?user=root
    ~~~

    [#65460][#65460]
- The URLs spelled out by [`cockroach start`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start), [`cockroach start-single-node`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start-single-node), and [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) in various outputs now always contain a target database for the connection; for example, `defaultdb` for regular servers. Certain drivers previously automatically filled in the name "postgres" if the database name field was empty. [#65460][#65460]
- The connection URLs spelled out by [`cockroach start`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start), [`cockroach start-single-node`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start-single-node), and [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) in various outputs now include a variant suitable for use with JDBC client apps. [#65460][#65460]
- [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) now support the client-side parameter `border` like `psql`. [#66253][#66253]
- Added support for [`cockroach debug ballast`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-ballast) on Windows. [#66793][#66793]
- The [`cockroach import pgdump`](https://www.cockroachlabs.com/docs/v21.2/cockroach-import) command now recognizes custom target database names inside the URL passed via `--url`. Additionally, the command now also accepts a `--database` parameter. Using this parameter is equivalent to customizing the database inside the `--url` flag. [#66375][#66375]
- `cockroach debug job-trace` now creates a `job-trace.zip` which contains trace information for each node executing the job.
[#66914][#66914]
- Added the `--recursive` or `-r` flag to the [`cockroach userfile upload`](https://www.cockroachlabs.com/docs/v21.2/cockroach-userfile-upload) CLI command, allowing users to upload the entire subtree rooted at a specified directory to user-scoped file storage: `userfile upload -r path/to/source/dir destination`. The destination can be expressed in one of four ways:
  • Empty (not specified)
  • A relative path, such as `path/to/dir`
  • A well-formed URI with no host, such as `userfile:///path/to/dir/`
  • A full well-formed URI, such as `userfile://db.schema.tablename_prefix/path/to/dir`
If a destination is not specified, the default URI scheme and host will be used, and the basename from the source will be used as the destination directory. For example: `userfile://defaultdb.public.userfiles_root/yourdirectory`. If the destination is a relative path such as `path/to/dir`, the default userfile URI scheme and host will be used (`userfile://defaultdb.public.userfiles_$user/`), and the relative path will be appended to it. For example: `userfile://defaultdb.public.userfiles_root/path/to/dir`. If the destination is a well-formed URI with no host, such as `userfile:///path/to/dir/`, the default userfile URI scheme and host will be used (`userfile://defaultdb.public.userfiles_$user/`). For example: `userfile://defaultdb.public.userfiles_root/path/to/dir`. If the destination is a full well-formed URI, such as `userfile://db.schema.tablename_prefix/path/to/dir`, then it will be used verbatim. For example: `userfile://foo.bar.baz_root/path/to/dir`. [#65307][#65307]
- Previously, the `crdb-v2` log file format lacked a parser. This has now changed. [#65633][#65633]
- The [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-merge-logs) command now renders in color by default. [#66629][#66629]
- CockroachDB now supports a new [logging channel](https://www.cockroachlabs.com/docs/v21.2/logging) called `TELEMETRY`. This will be used in later versions to report diagnostic events useful to Cockroach Labs for product analytics. (At the time of this writing, no events are defined for the `TELEMETRY` channel yet.) When no logging configuration is specified, this channel is connected to file output, with a maximum retention of 1MiB. To also produce the diagnostic output elsewhere, one can [define a new sink](https://www.cockroachlabs.com/docs/v21.2/configure-logs) that captures this channel. For example, to see diagnostics reports on the standard error, one can use: `--log='sinks: {stderr: {channels: TELEMETRY, filter: INFO}}'`. When configuring file output, the operator should be careful to apply a separate maximum retention for the `TELEMETRY` channel from other file outputs, as telemetry data can be verbose and crowd out other logging messages. For example: `--log='sinks: {file-groups: {telemetry: {channels: TELEMETRY, max-group-size: 1MB}, ...}}'`. [#66427][#66427]
- Added the `cockroach debug statement-bundle recreate ` command, which allows users to load a statement bundle into an in-memory database for inspection. [#67979][#67979]
- CockroachDB server nodes now report more environment variables in logs upon startup. Only certain environment variables that may have an influence on the server's behavior are reported. [#66842][#66842]
- [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) now support the `\c` / `\connect` client-side command, in a way similar to `psql`:
  • `\c` without arguments: display the current connection parameters.
  • `\c [DB] [USER] [HOST] [PORT]`: connect using the specified parameters; specify `-` to omit one parameter.
  • `\c URL`: connect using the specified URL. For example: `\c - myuser` to reconnect to the same server/db as `myuser`.
This feature is intended to ease switching across simulated nodes in `cockroach demo`. Note: `\c ` reuses the existing server connection to change the current database, using a `SET` statement. To force a network reconnect, use `\c -` then `\c `, or use `\c -`. Note: when using the syntax with discrete parameters, the generated URL reuses the same TLS parameters as the original connection, including the CA certificate used to validate the server. To use different TLS settings, use `\c ` instead. [#66258][#66258]
- Added a new `HTTP` sink to the logging system. This can be [configured similarly to other log sinks](https://www.cockroachlabs.com/docs/v21.2/configure-logs) with the new `http-servers` and `http-defaults` sections of the logging config passed via the `--log` or `--log-config-file` command-line flags. [#66196][#66196]
- The `\c` client-side command in [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) now always reconnects to the server, even when only changing the current database. (This negates a part of a previous release note.) [#68326][#68326]
- [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) now recognizes the command-line flag `--listening-url-file` like [`cockroach start`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start) and [`cockroach start-single-node`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start-single-node). When specified, the `demo` utility will write a valid connection URL to that file after the test cluster has been initialized. This facility also makes it possible to automatically wait until the demo cluster has been initialized in automation; for example, by passing the name of a Unix named FIFO via the new flag. [#68706][#68706]
- `cockroach mt start-sql` now supports `--advertise-addr` in the same fashion as [`cockroach start`](https://www.cockroachlabs.com/docs/v21.2/cockroach-start). [#69113][#69113]
- `cockroach debug decode-proto` no longer emits default values by default. [#69185][#69185]
- The `cockroach debug tsdump` command now accepts `--from` and `--to` flags that limit the date range for which timeseries are exported. [#69491][#69491]
- Log file read and write permissions may now be set via the new `file-permissions` key in the `--log` flag or `--log-config-file` file. [#69243][#69243]
- Updated the output of the `--locality` and `--locality-addr` flags to use terms that match cloud provider names for things such as 'region' and 'zone'. [#62381][#62381]

<h3 id="v21-2-0-beta-1-api-endpoint-changes">API endpoint changes</h3>

- A list of node IDs representing the nodes that store data for the database has been added to the stats field in the database details endpoint under `nodeIds`. Database details must be requested with `include_stats` set to `true`, e.g., `/_admin/v1/databases/{database}?include_stats=true`. Similarly, `nodeIds` has also been added to the table stats endpoint, where it is an ordered list of the node IDs that store the table data: `/_admin/v1/databases/{database}/tables/{table}/stats`. [#69788][#69788]
- The `changefeed.poll_request_nanos` metric is no longer reported by the node status API, the `crdb_internal.metrics` table, or the [Prometheus endpoint](https://www.cockroachlabs.com/docs/v21.2/monitoring-and-alerting#prometheus-endpoint). [#63935][#63935]
- The transaction abort error reason `ABORT_REASON_ALREADY_COMMITTED_OR_ROLLED_BACK_POSSIBLE_REPLAY` has been renamed to `ABORT_REASON_RECORD_ALREADY_WRITTEN_POSSIBLE_REPLAY`. [#67215][#67215]
- A `Stats` message was added to the `admin` `DatabaseDetails` response, providing `RangeCount` and `ApproximateDiskBytes` in support of upcoming UI changes to the DB Console. [#67986][#67986]
- Tenant pods now expose the Statements API at `/_status/statements` on their HTTP port. [#66675][#66675]
- Tenant pods now expose the ListSessions API at `/_status/sessions` on their HTTP port. [#69376][#69376]
- Added a new endpoint `/_status/combinedstmts` to retrieve persisted and in-memory statements from `crdb_internal.statement_statistics` and `crdb_internal.transaction_statistics` by `aggregated_ts` range. The request supports optional query string parameters `start` and `end`, which are the date range in unix time. The response returned is currently the response expected from `/_status/statements`. `/_status/statements` has also been updated to support the parameters `combined`, `start`, and `end`. If `combined` is `true`, then the statements endpoint will use `/_status/combinedstmts` with the optional parameters `start` and `end`. [#69238][#69238]

<h3 id="v21-2-0-beta-1-db-console-changes">DB Console changes</h3>

- A new column on the [Database Page](https://www.cockroachlabs.com/docs/v21.2/ui-databases-page) now shows the node and region information for each database. The [Tables view](https://www.cockroachlabs.com/docs/v21.2/ui-databases-page#tables-view) now displays a summary section of the nodes and regions where the table data is stored. The new table columns and region/node sections are only displayed if there is more than one node. [#69804][#69804]
- Fixed duplicates of statements on the [Transaction Details Page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page#transaction-details-page) for multi-node clusters. [#61771][#61771]
- Changed copy that previously referred to the app as "Admin UI" to "DB Console" instead. [#62452][#62452]
- Users can now reset SQL stats from the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview). [#63342][#63342]
- Created new routes for statements. A database name can be passed on statements routes, so that only statements executed on that particular database are displayed. Added a new function returning all databases that had at least one statement executed during the current statistics collection period. [#64087][#64087]
- The lease history section on the range report [debug page](https://www.cockroachlabs.com/docs/v21.2/ui-debug-pages) now shows the type of lease acquisition event that resulted in a given lease. [#63822][#63822]
- Updated the DB Console to show information about the database on the [Statements Page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), and added the ability to choose which columns to display. [#64614][#64614]
- The DB Console now shows information about the region of a node on the [Transactions Page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page). [#64996][#64996]
- Added missing formatting for some event types displayed in the DB Console. [#65717][#65717]
- Changed the default event formatting to appear less alarming to users. [#65717][#65717]
- The [Statement Details Page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page) now displays information about the nodes and regions a statement was executed on. [#65126][#65126]
- The [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages now display information about the nodes and regions a statement was executed on. [#65126][#65126]
- Changed the time format on metrics events to 24-hour UTC time. [#66277][#66277]
- Removed styling width calculation from multiple bars. [#66734][#66734]
- Added a new chart showing the latency of establishing a new SQL connection, including the time spent on authentication. [#66625][#66625]
- The KV transaction restarts chart was moved from the Distributed metrics to the [SQL Dashboard](https://www.cockroachlabs.com/docs/v21.2/ui-sql-dashboard) to be close to the Open SQL Transactions chart for more prominent visibility. [#66973][#66973]
- Fixed mislabelled tooltips on the [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) and [Transaction Details](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page#transaction-details-page) pages in the DB Console. [#66605][#66605]
- The DB Console now uses a dotted underline on text that contains a tooltip. The 'i' icon was removed.
[#67023][#67023]
- Added `""` to whitespace application names in the filter selection on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages. [#66967][#66967]
- Added a new Overload dashboard that groups metrics that are useful for admission control. [#66595][#66595]
- The SQL Statement Contention Time chart is surfaced more prominently on the [SQL Dashboard](https://www.cockroachlabs.com/docs/v21.2/ui-sql-dashboard). [#66969][#66969]
- Added a Full Table/Index Scans time series chart on the [SQL Dashboard](https://www.cockroachlabs.com/docs/v21.2/ui-sql-dashboard). [#66972][#66972]
- The 'threads' debugging page, previously used to inspect RocksDB threads, has been removed. [#67389][#67389]
- Fixed a color mismatch on the node status badge on the [Cluster Overview](https://www.cockroachlabs.com/docs/v21.2/ui-cluster-overview-page) page. [#68049][#68049]
- Added a CES survey link component to support gathering client feedback. [#68429][#68429]
- The [Transaction Details Page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page#transaction-details-page)'s SQL box now shows all of a transaction's statements in the order that they were executed. The Transaction Details Page also no longer displays statement statistics. [#68447][#68447]
- Updated the [Databases Page](https://www.cockroachlabs.com/docs/v21.2/ui-databases-page) in the DB Console to bring it into alignment with our modern UX. [#68390][#68390]
- The "Logical Plan" tab in the DB Console has been renamed "Explain Plan", and the displayed plan format has been updated to match the output of the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.2/explain) command in the SQL shell. Global `EXPLAIN` properties, which were previously missing, have been added to the logical plan in the DB Console. The `EXPLAIN` format shown below should now be reflected in the DB Console:

    ~~~
    distribution: full
    vectorized: true

    • hash join
    │ estimated row count: 503
    │ equality: (rider_id) = (id)
    │
    ├── • scan
    │     estimated row count: 513 (100% of the table; stats collected 9 seconds ago)
    │     table: rides@primary
    │     spans: FULL SCAN
    │
    └── • scan
          estimated row count: 50 (100% of the table; stats collected 1 minute ago)
          table: users@primary
          spans: FULL SCAN
    ~~~

    [#68566][#68566]
- Changed date times on the [Jobs Page](https://www.cockroachlabs.com/docs/v21.2/ui-jobs-page) to use 24-hour UTC. [#68916][#68916]
- Added admission control metrics to the Overload dashboard. [#68595][#68595]
- Hid node and region information on the new tenant plan (serverless/free tier). [#69444][#69444]
- Added a new date range selector component to the DB Console's [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages, with the ability to show historical data. The default date range is set to 1 hour ago, and is used as the value when users reset the date range. [#68831][#68831]
- Fixed tooltip text on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages to use the correct setting, `diagnostics.sql_stat_reset.interval`, instead of the previous value, `diagnostics.reporting.interval`. [#69577][#69577]

<h3 id="v21-2-0-beta-1-bug-fixes">Bug fixes</h3>

- Fixed a bug where [cluster backups with revision history](https://www.cockroachlabs.com/docs/v21.2/take-backups-with-revision-history-and-restore-from-a-point-in-time#create-a-backup-with-revision-history) may have included dropped descriptors in the "current" snapshot of descriptors on the cluster. [#68983][#68983]
- Users can now only scroll in the content section of the [Transactions Page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page), [Statements Page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), and [Sessions Page](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page). [#69620][#69620]
- Previously, when using [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.2/alter-primary-key) on a [regional by row](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-by-row-tables) table, the copied unique index from the old primary key would not have the correct zone configurations applied. This is now resolved, but users who encountered this bug should re-create the index. [#69681][#69681]
- Fixed a bug that caused incorrect evaluation of the [`IN` operator](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#supported-operations) when the tuple on the right-hand side of the operator included a subquery, like `a IN ('foo', (SELECT s FROM t), 'bar')`. [#69651][#69651]
- Fixed a bug where an internal error or a crash could occur when some [`crdb_internal` built-in functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) took string-like type arguments (e.g., `name`). [#69698][#69698]
- Previously, users would receive a panic message when the log parser failed to extract [log file formats](https://www.cockroachlabs.com/docs/v21.2/log-formats). This has been replaced with a helpful error message. [#69018][#69018]
- Fixed a bug to ensure that auxiliary tables used during [cluster restore](https://www.cockroachlabs.com/docs/v21.2/restore) are garbage collected quickly afterwards. [#67936][#67936]
- [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) will now correctly ignore dropped databases that may have been included in [cluster backups with revision history](https://www.cockroachlabs.com/docs/v21.2/take-backups-with-revision-history-and-restore-from-a-point-in-time#create-a-backup-with-revision-history). [#68551][#68551]
- Fixed a bug that could cause prolonged unavailability due to lease transfer to a replica that may be in need of a [Raft snapshot](https://www.cockroachlabs.com/docs/v21.2/architecture/replication-layer#snapshots). [#69696][#69696]
- Fixed a bug where resuming an active schedule would always reset its next run time. This was sometimes undesirable with schedules that had a [`first_run` option](https://www.cockroachlabs.com/docs/v21.2/create-schedule-for-backup#schedule-options) specified. [#69571][#69571]
- Fixed a regression in statistics estimation in the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) for very large tables. The bug, which had been present since v20.2.14 and v21.1.7, could cause the optimizer to severely underestimate the number of rows returned by an expression. [#69711][#69711]
- The [`raft.commandsapplied`](https://www.cockroachlabs.com/docs/v21.2/ui-custom-chart-debug-page#available-metrics) metric is now populated again.
[#69857][#69857]
- Fixed a bug where the store rebalancer was unable to rebalance leases for hot ranges that received a disproportionate amount of traffic relative to the rest of the cluster. This often led to prolonged single-node hotspots in certain workloads that produce hot ranges. [#65379][#65379]
- Added protection to [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) to guard against concurrent type changes on [user-defined types](https://www.cockroachlabs.com/docs/v21.2/enum) referenced by the target table. [#69674][#69674]
- DNS unavailability during range 1 leaseholder loss will no longer cause significant latency increases for queries and other operations. [b6fb0e626][b6fb0e626]
- Previously, using [`SET`](https://www.cockroachlabs.com/docs/v21.2/set-vars) in a transaction and having the transaction retry internally could result in the previous session variable value not being restored. For example:

    ~~~sql
    SET intervalstyle = 'postgres';
    BEGIN;
    do something with interval -- (1)
    SET intervalstyle = 'iso_8601';
    do something with interval -- (2)
    COMMIT;
    ~~~

    If the transaction retries at `COMMIT`, when attempting to re-run (1), we would have an interval style of `'iso_8601'` instead of the original `'postgres'` value. This has now been resolved. [#69554][#69554]
- The envelope schema in the Avro registry now honors `schema_prefix` and `full_table_name`. [#60946][#60946]
- Previously, dropping a column could cause check constraints that were currently validating to become active on a table. This has been fixed. [#62257][#62257]
- OpenTracing traces now work correctly across nodes. The client and server spans for RPCs are once again part of the same trace, instead of the server erroneously being a root span. [#62703][#62703]
- Fixed a bug which prevented `cockroach debug doctor zipdir` from validating foreign key information represented in the un-upgraded deprecated format. [#62829][#62829]
- Schema changes that include both a column addition and a primary key change in the same transaction no longer result in a failed changefeed. [#63217][#63217]
- Fixed a bug whereby transient clock synchronization errors could result in permanent schema change failures. [#63671][#63671]
- Fixed a bug in `debug backup export` caused by inspecting a table with multiple ranges. [#63678][#63678]
- Fixed a performance regression for very simple queries. [#64225][#64225]
- Fixed a bug that prevented transactions which lasted longer than 5 minutes and then performed writes from committing. [#63725][#63725]
- Added a fix to prevent a rare crash that could occur when reading data from interleaved tables. [#64374][#64374]
- Fixed an "index out of range" internal error with certain simple queries. [#65018][#65018]
- Hosts listed with the `connect --join` command-line flag now default to port `26257` (was `443`). This matches the existing behavior of `start --join`. [#65014][#65014]
- CockroachDB now shows a correct error message if it tries to parse an interval that is out of range. [#65377][#65377]
- Fixed a bug where `NaN` coordinates could make `ShortestLine`/`LongestLine` panic. [#65445][#65445]
- Fixed a bug whereby using an `enum` value as a placeholder in an `AS OF SYSTEM TIME` query preceding a recent change to that `enum` could result in a fatal error. [#65620][#65620]
- Fixed a race condition where transaction cleanup would fail to take into account ongoing writes and clean up their intents.
[#65592][#65592]
- The [Transactions Page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) now shows the correct value for implicit transactions. [#65126][#65126]
- Fixed a bug where `TIMETZ` values would not display appropriately in the CLI. [#65321][#65321]
- Fixed a bug which prevented adding self-referencing `FOREIGN KEY` constraints in the `NOT VALID` state. [#65871][#65871]
- The `cockroach mt start-sql` command with a nonexistent tenant ID now returns an error. Previously, it would crash and poison the tenant ID for future usage. [#65683][#65683]
- CockroachDB now correctly handles errors during the `pgwire` extended protocol. Specifically, when an error is detected while processing any extended protocol message, an `ErrorResponse` is returned; then the server reads and discards messages until a `Sync` command is received from the client. This matches the [PostgreSQL behavior](https://www.postgresql.org/docs/13/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY). [#57590][#57590]
- Fixed a bug where owners of a table had the privilege to `SELECT` from it, but `has_*_privilege`-related functions would return false. [#65766][#65766]
- Added a more accurate error message for restoring `AS OF SYSTEM TIME` before the GC TTL. [#66025][#66025]
- When a non-SQL CLI command (e.g., [`cockroach init`](https://www.cockroachlabs.com/docs/v21.2/cockroach-init)) was invoked with the `--url` flag and the URL did not contain a `sslmode` parameter, the command was incorrectly defaulting to operate as if `--insecure` were specified. This has been corrected. [#65460][#65460]
- The URLs printed out by the client-side command `\demo ls` in [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) now properly include the workload database name, if any was created. [#65460][#65460]
- Fixed a bug where a superfluous unique constraint would be added to a table during a primary key change, when the primary key change specified a new primary key constraint that involved the same columns in the same directions. [#66225][#66225]
- Properly populated `crdb_internal.jobs`'s `coordinator_id` field, which had not been properly populated since v20.2. [#61417][#61417]
- Fixed a bug in [`SHOW ZONE CONFIGURATIONS`](https://www.cockroachlabs.com/docs/v21.2/show-zone-configurations) where long constraints fields might show `\n` characters. [#69470][#69470]
- CockroachDB could previously error out when a query involving tuples with collated strings and NULLs was executed in a distributed manner. This is now fixed. [#66337][#66337]
- Fixed a bug with PostgreSQL compatibility where dividing an interval by a number would round to the nearest microsecond instead of always rounding down. [#66345][#66345]
- Previously, rows treated as tuples in functions such as `row_to_json` may have had their keys normalized to lowercase, instead of being preserved in the original casing as per PostgreSQL. This is now fixed. [#66535][#66535]
- CockroachDB could previously crash when executing `EXPLAIN (VEC)` on some mutations. This is now fixed. [#66569][#66569]
- Fixed a bug that caused a panic for window functions operating in `GROUPS` mode with `OFFSET PRECEDING` start and end bounds. [#66582][#66582]
- Fixed a bug in the chart catalog admin API. [#66645][#66645]
- Fixed a deadlock during `adminVerifyProtectedTimestamp`. [#66760][#66760]
- Fixed a bug where a substring of a linestring with the same points would return `EMPTY` instead of a `POINT` with the repeated point.
[#66738][#66738]
- Fixed a bug where it was possible for a linestring returned from `ST_LineSubString` to have repeated points. [#66738][#66738]
- Fixed an error occurring when executing `OPERATOR(pg_catalog.~)`. [#66865][#66865]
- Fixed `ST_LineSubstring` panicking on `LINESTRING EMPTY` instead of returning `null`. [#66936][#66936]
- A migration will be run when users upgrade to v21.2 (specifically v21.2.14). This migration fixes any privilege descriptors that were corrupted from the fallout of the `ZONECONFIG`/`USAGE` bug on tables and databases after upgrading from v20.1 to v20.2 ([#65010](https://github.com/cockroachdb/cockroach/pull/65010)) and those that were corrupted after converting a database to a schema ([#65697](https://github.com/cockroachdb/cockroach/issues/65697)). [#66495][#66495]
- Fixed a bug where job leases might be revoked due to a transient network error. [#67075][#67075]
- Fixed a case where [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import) would panic when parsing geospatial schemas with spatial index tuning parameters. In particular, this bug could be triggered by specifying the `fillfactor` option, or setting the `autovacuum_enabled` option to `false`. [#66899][#66899]
- Intent garbage collection no longer waits for or aborts running transactions. [#65001][#65001]
- The [Statements Page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) now properly displays the Statement Time label on the column selector. [#67327][#67327]
- Avro feeds now support special decimals like `Infinity`. [#66870][#66870]
- Fixed a typo on the Network tooltip on the [Statements Page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page). [#65126][#65126]
- PostgreSQL-style intervals now print a `+` sign for day units if the preceding year/month unit was negative (e.g., `-1 year -2 months 2 days` will now print as `-1 year -2 months +2 days`). [#67210][#67210]
- SQL Standard intervals will omit the day value if the day value is `0`. [#67210][#67210]
- Added partial redactability to logged SQL statements. This change provides greater visibility into SQL usage in the logs, enabling a greater ability to troubleshoot. [#66359][#66359]
- Fixed a bug in jobs where failures to write to the jobs table could prevent subsequent adoption of a job until the previous node died or the job was paused. [#67671][#67671]
- Fixed a minor resource leak that occurred when a [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) was run. [#67478][#67478]
- Previously, an unavailable node that started draining or [decommissioning](https://www.cockroachlabs.com/docs/v21.2/node-shutdown?filters=decommission) would be treated as live and thus could receive a lease transfer, leading to the range becoming unavailable. This has been fixed. [#67319][#67319]
- [`INSERT`](https://www.cockroachlabs.com/docs/v21.2/insert) and [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update) statements which operate on larger rows are now split into batches using the `sql.mutations.mutation_batch_byte_size` setting. [#67537][#67537]
- Fixed a bug that could cause the `min` window function to incorrectly return `null` when executed with a non-default `EXCLUDE` clause, because `min` and `max` did not ignore `null` input values. [#68025][#68025]
- Fixed a bug that could cause a panic when a window function was executed in `RANGE` mode with `OFFSET PRECEDING` or `OFFSET FOLLOWING` on a datetime column. [#68013][#68013]
- `SHOW search_path` will now properly quote `$user`.
[#68034][#68034] -- Fixed a bug where restores of data with multiple column families could be split illegally (within a single SQL row). This could result in temporary data unavailability until the ranges on either side of the invalid split were merged. [#67497][#67497] -- Fixed a bug that was introduced in v21.1.5, which prevented nodes from decommissioning in a cluster if there were multiple nodes intermittently missing their liveness heartbeats. [#67714][#67714] -- Fixed a bug where the `schedules.backup.succeeded` and `schedules.backup.failed` metrics would sometimes not be updated. [#67855][#67855] -- Previously, a `SHOW GRANTS ON TYPE db.public.typ` command would not correctly show the grants if the current database was not `db`. This is now fixed. [#68137][#68137] -- Fixed a bug which permitted the dropping of `enum` values which were in use in index predicates or partitioning values. [#68257][#68257] -- Fixed a bug that could cause the `min` and `max` window functions to return incorrect results when the window frame for a row was smaller than the frame for the previous row. [#68314][#68314] -- Previously, parsing a date from a string would incorrectly assuming YMD format instead of the MDY format if the date was formatted using the two digit format for year: "YY-MM-DD" instead of "MM-DD-YY". This has been resolved. However, if you relied on having two-digit years as YY-MM-DD, prepend `0`s at the front until it is at least 3 digits; e.g., "15-10-15" for the year 15 should read "015-10-15". [#68093][#68093] -- Fixed a bug where migration jobs might run and update a cluster version before the cluster was ready for the upgrade. The bug could result in many extra failed migration jobs. [#67281][#67281] -- `IMPORT PGDUMP` with a `UDT` would result in a nil pointer exception. It now fails gracefully. [#67994][#67994] -- Cascaded drop of views could run into '`table ...is already being dropped`' errors incorrectly. This is now fixed. [#68601][#68601] -- Fixed a bug in which an [`ENUM`](https://www.cockroachlabs.com/docs/v21.2/enum)-type value could be dropped despite it being referenced in a table's `CHECK` expression. [#68666][#68666] -- Fixed an oversight in the data generator for TPC-H which was causing a smaller number of distinct values to be generated for `p_type` and `p_container` in the part table than the spec called for. [#68699][#68699] -- Fixed a bug that created non-partial unique constraints when a user attempted to create a partial unique constraint in [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v21.2/alter-table) statements. [#68629][#68629] -- Fixed a bug where encryption-at-rest registry would accumulate nonexistent file entries forever, contributing to the filesystem operation's latency on the store. [#68394][#68394] -- Fixed a bug where [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import) would incorrectly reset its progress upon resumption. [#68337][#68337] -- Previously, given a table with [hash-sharded indexes](https://www.cockroachlabs.com/docs/v21.2/hash-sharded-indexes), the output of `SHOW CREATE TABLE` was not round-trippable: executing the output would not create an identical table. This has been fixed by showing `CHECK` constraints that are automatically created for these indexes in the output of `SHOW CREATE TABLE`. The bug had existed since v21.1. 
[#69001][#69001] -- Importing tables via `IMPORT PGDUMP` or `IMPORT MYSQL` should now honor [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.default_int_size` and session variable `default_int_size`. [#68902][#68902] -- Fixed a bug with cardinality estimation in the optimizer that was introduced in v21.1.0. This bug could cause inaccurate row count estimates in queries involving tables with a large number of null values. As a result, it was possible that the optimizer could choose a suboptimal plan. This issue has now been fixed. [#69070][#69070] -- Fixed internal or "invalid cast" errors in some cases involving cascading updates. [#69126][#69126] -- Previously, CockroachDB could return an internal error when performing the streaming aggregation in some edge cases, and this is now fixed. The bug had been present since v21.1. [#69122][#69122] -- Introduced checks on [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages so that the managed repo can update the cluster-ui version. [#69205][#69205] -- When using `COPY FROM .. BINARY`, the correct format code will now be returned. [#69066][#69066] -- Previously, `COPY FROM ... BINARY` would return an error if the input data was split across different messages. This is now fixed. [#69066][#69066] -- Previously, `COPY FROM ... CSV` would require each `CopyData` message to be split at the boundary of a record. This was a bug since the `COPY` protocol would allow messages to be split at arbitrary points. This is now fixed. [#69066][#69066] -- Previously, `COPY FROM ... CSV` did not correctly handle octal byte escape sequences such as `\011` when using a `BYTEA` column. This is now fixed. [#69066][#69066] -- Fixed a bug that caused internal errors with set operations, like `UNION`, and columns with tuple types that contained constant `NULL` values. This bug was introduced in v20.2. [#68627][#68627] -- Fixed a crash when using the `cockroach backup debug` tool. [#69251][#69251] -- Previously, after a temporary node outage, other nodes in the cluster could fail to connect to the restarted node due to their circuit breakers not resetting. This would manifest in the logs via messages "unable to dial nXX: breaker open", where `XX` is the ID of the restarted node. (Note that such errors are expected for nodes that are truly unreachable, and may still occur around the time of the restart, but for no longer than a few seconds). This is now fixed. [#69405][#69405] -- Previously, table stats collection issued via an `ANALYZE` or [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.2/create-statistics) statement without specifying `AS OF SYSTEM TIME` option could run into `flow: memory budget exceeded`. This has been fixed. [#69483][#69483] -- Fixed a bug where [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import) internal retries (i.e., due to node failures) might not pick up the latest progress updates. [#68218][#68218] -- Fixed a bug where the summary displayed after an [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import) command would sometimes be inaccurate due to retries. [#68218][#68218] -- Long-running `ANALYZE` statements will no longer result in GC TTL errors. [#68929][#68929] - -
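As a quick illustration of the two-digit-year change above, here is a minimal sketch (the values are illustrative; behavior is as described in the note):

~~~
-- A leading two-digit field is no longer assumed to be the year.
-- To force a literal year 15, pad the year to at least three digits:
SELECT '015-10-15'::DATE;  -- year 15, October 15
~~~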

Performance improvements

- -- Intent resolution for transactions that write enough intents that intent ranges are tracked is now much faster (potentially 100x) when using the separated lock table. [#66268][#66268] -- Updated the [optimizer cost model](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) so that, all else being equal, the optimizer prefers plans in which `LIMIT` operators are pushed as far down the tree as possible. This can reduce the number of rows that need to be processed by higher operators in the plan tree, thus improving performance. [#69688][#69688] -- QPS-based replica rebalancing is now aware of different constraints placed on different [replication zones](https://www.cockroachlabs.com/docs/v21.2/configure-replication-zones). This means that heterogeneously loaded replication zones (for instance, regions) will achieve a more even distribution of QPS across the stores inside each of these zones. [#65379][#65379] -- Some additional expressions using the `<@` (contained by) and `@>` (contains) operators now support index-acceleration with the indexed column on either side of the expression. [#61219][#61219] -- Some additional expressions using the `<@` (contained by) and `@>` (contains) operators now support index-acceleration with the indexed column on either side of the expression. [#61817][#61817] -- Columns that are held constant in partial index predicates can now be produced when scanning the partial index. This eliminates unnecessary primary index joins to retrieve those constant columns in some queries, resulting in lower latency. [#62406][#62406] -- Inverted joins using the `<@` (contained by) and `@>` (contains) operators are now supported with the indexed column on either side of the expression. [#62626][#62626] -- The optimizer now folds functions to `NULL` when the function does not allow `NULL` arguments and one of the arguments is a `NULL` constant. As a result, more efficient query plans will be produced for queries with these types of function calls. [#62924][#62924] -- Expressions with the `->` (fetch val) operator on the left side of either `<@` (contained by) or `@>` (contains) now support index-acceleration. [#63048][#63048] -- SQL will now emit `GetRequests` to KV when possible instead of always emitting `ScanRequests`. This manifests as a modest performance improvement for some workloads. [#61583][#61583] -- Set operations (`UNION`, `UNION ALL`, `INTERSECT`, `INTERSECT ALL`, `EXCEPT`, and `EXCEPT ALL`) can now maintain ordering if both inputs are ordered on the desired output ordering. This can eliminate unnecessary sort operations and improve performance. [#63805][#63805] -- Reduced memory usage in some write-heavy workloads. [#64222][#64222] -- Increased the intelligence of the optimizer around the ability of a scan to provide certain requested orderings when some of the columns are held constant. This can eliminate unneeded sort operations in some cases, resulting in improved performance. [#64254][#64254] -- Increased the intelligence of the optimizer around orderings that can be provided by certain relational expressions when some columns are constant or there are equalities between columns. This can allow the optimizer to plan merge joins, streaming group bys, and streaming set operations in more cases, resulting in improved performance. [#64501][#64501] -- Increased the intelligence of the optimizer around orderings that can be provided by certain relational expressions when there are equalities between columns. 
This can allow the optimizer to remove unnecessary sort operations in some cases, thus improving performance. [#64593][#64593] -- Improved the performance of distributed queries that need to send a lot of data of certain datatypes across the network. [#64169][#64169] -- Peak memory usage in the lock table is now significantly reduced. Runaway CPU usage due to wasted quadratic time complexity in clearing unclearable locks has also been addressed. [#64102][#64102] -- The selectivity of query filters with `OR` expressions is now calculated more accurately during query optimization, improving query plans in some cases. [#64886][#64886] -- Validation of a new `UNIQUE` index in a `REGIONAL BY ROW` table no longer requires an inefficient and memory-intensive hash aggregation query. The optimizer can now plan the validation query so that it uses all streaming operations, which are much more efficient. [#65355][#65355] -- A limited scan now checks for conflicting locks in an optimistic manner, which means it will not conflict with locks (typically unreplicated locks) that were held in the scan's full spans, but were not in the spans that were scanned until the limit was reached. This behavior can be turned off by changing the value of the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `kv.concurrency.optimistic_eval_limited_scans.enabled` to `false`. [#58670][#58670] -- Queries that produce many result rows will now usually run faster when executed via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution). [#65289][#65289] -- Inner, left, and semi joins involving `REGIONAL BY ROW` tables can now take advantage of locality-optimized search. This optimization allows lookup joins to avoid communicating with remote nodes if a lookup is known to produce at most one match per input row, and all matches are found locally. This can reduce query latency. [#65784][#65784] -- Improved the performance of `has_table_privilege` by using an internal cache for performing privilege lookups. [#65766][#65766] -- Improved the performance of `has_any_column_privilege` by removing some internal queries. [#65766][#65766] -- Improved the performance of `has_column_privilege` by removing excessive queries. [#65766][#65766] -- When admission control is enabled, work sent to the KV layer is subject to admission control that takes into account write overload in the storage engines. [#65850][#65850] -- Regexp expressions that restrict values to a prefix (e.g., `x ~ '^foo'`) now result in better plans if there is a suitable index. [#66441][#66441] -- The optimizer can now create query plans that use streaming set operations, even when no ordering is required by the query. Streaming set operations are more efficient than the alternative hash set operations, because they avoid the overhead of building a hash table. This can result in improved performance for queries containing the set operations `UNION`, `INTERSECT`, `INTERSECT ALL`, `EXCEPT`, and `EXCEPT ALL`. (`UNION ALL` does not benefit from this optimization.) [#64953][#64953] -- A limited scan now checks for conflicting latches in an optimistic manner, which means it will not conflict with latches that were held in the scan's full spans, but were not in the spans that were scanned until the limit was reached. 
This behavior can be turned off (along with optimistic locking) by changing the value of the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `kv.concurrency.optimistic_eval_limited_scans.enabled` to `false`. [#66059][#66059] -- The optimizer is now less likely to create query plans that require buffering a large number of rows in memory. This can improve performance by reducing memory pressure and reducing the likelihood that execution operators will need to spill to disk. [#66559][#66559] -- Validation of a new partial `UNIQUE` index in a `REGIONAL BY ROW` table no longer requires an inefficient and memory-intensive hash aggregation query. The optimizer can now plan the validation query so that it uses all streaming operations, which are much more efficient. [#66565][#66565] -- Fixed a performance regression that made the in-memory vectorized sorter slower than the row-engine sorter when the input had decimal columns. [#66807][#66807] -- Increased the default value for the `kv.transaction.max_intents_bytes` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) from 256KB to 4MB to improve transaction performance for large transactions at the expense of increased memory usage. Transactions above this size limit use a slower cleanup mode for commits and aborts (see the sketch after this list). [#66859][#66859] -- Adjusted optimizer cost to include the CPU cost of `lookupExprs` when used by lookup joins. When a lookup join is chosen, one of two strategies can be used: a simple 1-column lookup or a more involved multi-column lookup that makes more efficient use of indexes but has a higher CPU cost. This change makes the cost model reflect that extra cost. [#66786][#66786] -- Improved the efficiency of validation for some partial unique indexes in `REGIONAL BY ROW` tables by improving the query plan to use all streaming operations. [#67263][#67263] -- The latency of authenticating a user has been improved by adding a cache for lookups of authentication-related information. [#66919][#66919] -- Improved the optimizer's cardinality estimations for `enum` columns, including the `crdb_region` column in `REGIONAL BY ROW` tables, as well as all other columns with user-defined types. This may result in the optimizer choosing a better query plan in some cases. [#67374][#67374] -- Eliminated a round-trip when running most jobs. [#67671][#67671] -- The performance of queries returning many arrays has been improved. [#66941][#66941] -- Improved concurrency control for heavily contended write queries outside of transactions that touch multiple ranges, reducing excessive aborts and retries. [#67215][#67215] -- The optimizer can now decorrelate queries that have a limit on the right (uncorrelated) input of a lateral join when the limit is greater than one. [#68299][#68299] -- Sort performance has been improved when sorting columns of type `STRING`, `BYTES`, or `UUID`. [#67451][#67451] -- Lookup joins on partial indexes with virtual columns are now considered by the optimizer, resulting in more efficient query plans in some cases. [#68568][#68568] -- The `COCKROACHDB_REGISTRY` file used for encryption-at-rest will be replaced with a `COCKROACHDB_ENCRYPTION_REGISTRY`, which can be written to in a more efficient manner. [#67320][#67320] -- Reduced memory usage slightly during `ANALYZE` or [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.2/create-statistics) statements. [#69051][#69051] -- The optimizer more accurately costs streaming group-by operators. 
As a result, more efficient query plans should be chosen in some cases. [#68922][#68922] -- If a query is executed locally and needs to perform reads from multiple remote leaseholders, those remote reads may now be done faster. This is especially likely for locality-optimized search when there is a local region miss. [#68679][#68679] -- Improved the histogram construction logic so that histograms for columns with a large number of distinct values are more accurate. This can result in better cardinality estimates in the optimizer and enable the optimizer to choose better query plans. [#68698][#68698] -- Locality-optimized search is now supported for scans that are guaranteed to return 100,000 keys or fewer. This optimization allows the execution engine to avoid visiting remote regions if all requested keys are found in the local region, thus reducing the latency of the query. [#69395][#69395] - -
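If the new defaults above do not suit a workload, the cluster settings named in these notes can be adjusted. A minimal sketch (the values shown are illustrative, not recommendations):

~~~
-- Restore the previous, smaller intent-tracking budget (the default is now 4MB):
SET CLUSTER SETTING kv.transaction.max_intents_bytes = 262144;

-- Opt out of optimistic evaluation of limited scans:
SET CLUSTER SETTING kv.concurrency.optimistic_eval_limited_scans.enabled = false;
~~~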

Build changes

- -- Added the go-swagger dependency and updated the Makefile to call it to rebuild the spec in `docs/generated/swagger/`, which will eventually be used for API docs. [#62560][#62560] - -
- -

Contributors

- -This release includes 2750 merged PRs by 229 authors. -We would like to thank the following contributors from the CockroachDB community: - -- AJ (first-time contributor) -- Alan Acosta (first-time contributor) -- Aleksandr Fedorov (first-time contributor) -- Callum Neenan (first-time contributor) -- Catherine J (first-time contributor) -- David López -- Eugene Kalinin -- Ganeshprasad Biradar (first-time contributor) -- Jane Xing (first-time contributor) -- Janusz Marcinkiewicz (first-time contributor) -- Jonathan Albrecht (first-time contributor) -- Julien Levesy (first-time contributor) -- Justin Lowery (first-time contributor) -- K Rain Leander (first-time contributor) -- Keith McClellan (first-time contributor) -- Kumar Akshay -- Lakshmi Kannan (first-time contributor) -- Lauren Barker (first-time contributor) -- Masahiro Ikeda (first-time contributor) -- Max Neverov -- Miguel Novelo (first-time contributor) -- Mohammad Aziz (first-time contributor) -- Mohit Agarwal (first-time contributor) -- Nikola N (first-time contributor) -- OrlovM (first-time contributor) -- Rupesh Harode (first-time contributor) -- Sam X Smith (first-time contributor) -- Shivam Agrawal (first-time contributor) -- Sumit Tembe (first-time contributor) -- Tharun -- Wilson Meng (first-time contributor) -- Zhou Xing (first-time contributor) -- Zijie Lu (first-time contributor) -- aayush (first-time contributor) -- ajstorm (first-time contributor) -- auxten (first-time contributor) -- e-mbrown (first-time contributor) -- joesankey (first-time contributor) -- kurokochin (first-time contributor) -- linyimin (first-time contributor) -- oeph (first-time contributor) -- rharding6373 (first-time contributor) -- seekingua (first-time contributor) -- snyk-bot (first-time contributor) -- yangxuan (first-time contributor) -- zhangwei.95 (first-time contributor) - -
- -[#65379]: https://github.com/cockroachdb/cockroach/pull/65379 -[#66268]: https://github.com/cockroachdb/cockroach/pull/66268 -[#66312]: https://github.com/cockroachdb/cockroach/pull/66312 -[#66901]: https://github.com/cockroachdb/cockroach/pull/66901 -[#67936]: https://github.com/cockroachdb/cockroach/pull/67936 -[#68551]: https://github.com/cockroachdb/cockroach/pull/68551 -[#68983]: https://github.com/cockroachdb/cockroach/pull/68983 -[#69018]: https://github.com/cockroachdb/cockroach/pull/69018 -[#69167]: https://github.com/cockroachdb/cockroach/pull/69167 -[#69217]: https://github.com/cockroachdb/cockroach/pull/69217 -[#69328]: https://github.com/cockroachdb/cockroach/pull/69328 -[#69371]: https://github.com/cockroachdb/cockroach/pull/69371 -[#69382]: https://github.com/cockroachdb/cockroach/pull/69382 -[#69422]: https://github.com/cockroachdb/cockroach/pull/69422 -[#69554]: https://github.com/cockroachdb/cockroach/pull/69554 -[#69571]: https://github.com/cockroachdb/cockroach/pull/69571 -[#69594]: https://github.com/cockroachdb/cockroach/pull/69594 -[#69620]: https://github.com/cockroachdb/cockroach/pull/69620 -[#69626]: https://github.com/cockroachdb/cockroach/pull/69626 -[#69638]: https://github.com/cockroachdb/cockroach/pull/69638 -[#69641]: https://github.com/cockroachdb/cockroach/pull/69641 -[#69651]: https://github.com/cockroachdb/cockroach/pull/69651 -[#69667]: https://github.com/cockroachdb/cockroach/pull/69667 -[#69671]: https://github.com/cockroachdb/cockroach/pull/69671 -[#69674]: https://github.com/cockroachdb/cockroach/pull/69674 -[#69681]: https://github.com/cockroachdb/cockroach/pull/69681 -[#69688]: https://github.com/cockroachdb/cockroach/pull/69688 -[#69696]: https://github.com/cockroachdb/cockroach/pull/69696 -[#69698]: https://github.com/cockroachdb/cockroach/pull/69698 -[#69711]: https://github.com/cockroachdb/cockroach/pull/69711 -[#69722]: https://github.com/cockroachdb/cockroach/pull/69722 -[#69727]: https://github.com/cockroachdb/cockroach/pull/69727 -[#69730]: https://github.com/cockroachdb/cockroach/pull/69730 -[#69775]: https://github.com/cockroachdb/cockroach/pull/69775 -[#69787]: https://github.com/cockroachdb/cockroach/pull/69787 -[#69788]: https://github.com/cockroachdb/cockroach/pull/69788 -[#69804]: https://github.com/cockroachdb/cockroach/pull/69804 -[#69812]: https://github.com/cockroachdb/cockroach/pull/69812 -[#69813]: https://github.com/cockroachdb/cockroach/pull/69813 -[#69814]: https://github.com/cockroachdb/cockroach/pull/69814 -[#69857]: https://github.com/cockroachdb/cockroach/pull/69857 -[0cb53f31d]: https://github.com/cockroachdb/cockroach/commit/0cb53f31d -[10d10e84a]: https://github.com/cockroachdb/cockroach/commit/10d10e84a -[247332bc1]: https://github.com/cockroachdb/cockroach/commit/247332bc1 -[5effb91f6]: https://github.com/cockroachdb/cockroach/commit/5effb91f6 -[683f4c592]: https://github.com/cockroachdb/cockroach/commit/683f4c592 -[99157d973]: https://github.com/cockroachdb/cockroach/commit/99157d973 -[b3a9b702b]: https://github.com/cockroachdb/cockroach/commit/b3a9b702b -[b6fb0e626]: https://github.com/cockroachdb/cockroach/commit/b6fb0e626 -[#57590]: https://github.com/cockroachdb/cockroach/pull/57590 -[#58670]: https://github.com/cockroachdb/cockroach/pull/58670 -[#59079]: https://github.com/cockroachdb/cockroach/pull/59079 -[#60835]: https://github.com/cockroachdb/cockroach/pull/60835 -[#60946]: https://github.com/cockroachdb/cockroach/pull/60946 -[#61131]: https://github.com/cockroachdb/cockroach/pull/61131 -[#61219]: 
https://github.com/cockroachdb/cockroach/pull/61219 -[#61417]: https://github.com/cockroachdb/cockroach/pull/61417 -[#61439]: https://github.com/cockroachdb/cockroach/pull/61439 -[#61582]: https://github.com/cockroachdb/cockroach/pull/61582 -[#61583]: https://github.com/cockroachdb/cockroach/pull/61583 -[#61593]: https://github.com/cockroachdb/cockroach/pull/61593 -[#61602]: https://github.com/cockroachdb/cockroach/pull/61602 -[#61610]: https://github.com/cockroachdb/cockroach/pull/61610 -[#61629]: https://github.com/cockroachdb/cockroach/pull/61629 -[#61740]: https://github.com/cockroachdb/cockroach/pull/61740 -[#61771]: https://github.com/cockroachdb/cockroach/pull/61771 -[#61817]: https://github.com/cockroachdb/cockroach/pull/61817 -[#61862]: https://github.com/cockroachdb/cockroach/pull/61862 -[#61968]: https://github.com/cockroachdb/cockroach/pull/61968 -[#62053]: https://github.com/cockroachdb/cockroach/pull/62053 -[#62076]: https://github.com/cockroachdb/cockroach/pull/62076 -[#62164]: https://github.com/cockroachdb/cockroach/pull/62164 -[#62257]: https://github.com/cockroachdb/cockroach/pull/62257 -[#62294]: https://github.com/cockroachdb/cockroach/pull/62294 -[#62305]: https://github.com/cockroachdb/cockroach/pull/62305 -[#62308]: https://github.com/cockroachdb/cockroach/pull/62308 -[#62377]: https://github.com/cockroachdb/cockroach/pull/62377 -[#62381]: https://github.com/cockroachdb/cockroach/pull/62381 -[#62406]: https://github.com/cockroachdb/cockroach/pull/62406 -[#62411]: https://github.com/cockroachdb/cockroach/pull/62411 -[#62435]: https://github.com/cockroachdb/cockroach/pull/62435 -[#62440]: https://github.com/cockroachdb/cockroach/pull/62440 -[#62448]: https://github.com/cockroachdb/cockroach/pull/62448 -[#62452]: https://github.com/cockroachdb/cockroach/pull/62452 -[#62465]: https://github.com/cockroachdb/cockroach/pull/62465 -[#62483]: https://github.com/cockroachdb/cockroach/pull/62483 -[#62496]: https://github.com/cockroachdb/cockroach/pull/62496 -[#62560]: https://github.com/cockroachdb/cockroach/pull/62560 -[#62570]: https://github.com/cockroachdb/cockroach/pull/62570 -[#62580]: https://github.com/cockroachdb/cockroach/pull/62580 -[#62626]: https://github.com/cockroachdb/cockroach/pull/62626 -[#62628]: https://github.com/cockroachdb/cockroach/pull/62628 -[#62661]: https://github.com/cockroachdb/cockroach/pull/62661 -[#62662]: https://github.com/cockroachdb/cockroach/pull/62662 -[#62695]: https://github.com/cockroachdb/cockroach/pull/62695 -[#62703]: https://github.com/cockroachdb/cockroach/pull/62703 -[#62744]: https://github.com/cockroachdb/cockroach/pull/62744 -[#62764]: https://github.com/cockroachdb/cockroach/pull/62764 -[#62819]: https://github.com/cockroachdb/cockroach/pull/62819 -[#62829]: https://github.com/cockroachdb/cockroach/pull/62829 -[#62836]: https://github.com/cockroachdb/cockroach/pull/62836 -[#62924]: https://github.com/cockroachdb/cockroach/pull/62924 -[#62980]: https://github.com/cockroachdb/cockroach/pull/62980 -[#62989]: https://github.com/cockroachdb/cockroach/pull/62989 -[#63000]: https://github.com/cockroachdb/cockroach/pull/63000 -[#63048]: https://github.com/cockroachdb/cockroach/pull/63048 -[#63072]: https://github.com/cockroachdb/cockroach/pull/63072 -[#63100]: https://github.com/cockroachdb/cockroach/pull/63100 -[#63137]: https://github.com/cockroachdb/cockroach/pull/63137 -[#63173]: https://github.com/cockroachdb/cockroach/pull/63173 -[#63181]: https://github.com/cockroachdb/cockroach/pull/63181 -[#63217]: 
https://github.com/cockroachdb/cockroach/pull/63217 -[#63309]: https://github.com/cockroachdb/cockroach/pull/63309 -[#63319]: https://github.com/cockroachdb/cockroach/pull/63319 -[#63342]: https://github.com/cockroachdb/cockroach/pull/63342 -[#63343]: https://github.com/cockroachdb/cockroach/pull/63343 -[#63351]: https://github.com/cockroachdb/cockroach/pull/63351 -[#63353]: https://github.com/cockroachdb/cockroach/pull/63353 -[#63384]: https://github.com/cockroachdb/cockroach/pull/63384 -[#63412]: https://github.com/cockroachdb/cockroach/pull/63412 -[#63488]: https://github.com/cockroachdb/cockroach/pull/63488 -[#63492]: https://github.com/cockroachdb/cockroach/pull/63492 -[#63671]: https://github.com/cockroachdb/cockroach/pull/63671 -[#63677]: https://github.com/cockroachdb/cockroach/pull/63677 -[#63678]: https://github.com/cockroachdb/cockroach/pull/63678 -[#63725]: https://github.com/cockroachdb/cockroach/pull/63725 -[#63747]: https://github.com/cockroachdb/cockroach/pull/63747 -[#63748]: https://github.com/cockroachdb/cockroach/pull/63748 -[#63770]: https://github.com/cockroachdb/cockroach/pull/63770 -[#63799]: https://github.com/cockroachdb/cockroach/pull/63799 -[#63805]: https://github.com/cockroachdb/cockroach/pull/63805 -[#63822]: https://github.com/cockroachdb/cockroach/pull/63822 -[#63832]: https://github.com/cockroachdb/cockroach/pull/63832 -[#63839]: https://github.com/cockroachdb/cockroach/pull/63839 -[#63897]: https://github.com/cockroachdb/cockroach/pull/63897 -[#63900]: https://github.com/cockroachdb/cockroach/pull/63900 -[#63935]: https://github.com/cockroachdb/cockroach/pull/63935 -[#63953]: https://github.com/cockroachdb/cockroach/pull/63953 -[#63956]: https://github.com/cockroachdb/cockroach/pull/63956 -[#63959]: https://github.com/cockroachdb/cockroach/pull/63959 -[#64032]: https://github.com/cockroachdb/cockroach/pull/64032 -[#64035]: https://github.com/cockroachdb/cockroach/pull/64035 -[#64076]: https://github.com/cockroachdb/cockroach/pull/64076 -[#64087]: https://github.com/cockroachdb/cockroach/pull/64087 -[#64102]: https://github.com/cockroachdb/cockroach/pull/64102 -[#64137]: https://github.com/cockroachdb/cockroach/pull/64137 -[#64157]: https://github.com/cockroachdb/cockroach/pull/64157 -[#64169]: https://github.com/cockroachdb/cockroach/pull/64169 -[#64183]: https://github.com/cockroachdb/cockroach/pull/64183 -[#64199]: https://github.com/cockroachdb/cockroach/pull/64199 -[#64222]: https://github.com/cockroachdb/cockroach/pull/64222 -[#64225]: https://github.com/cockroachdb/cockroach/pull/64225 -[#64239]: https://github.com/cockroachdb/cockroach/pull/64239 -[#64254]: https://github.com/cockroachdb/cockroach/pull/64254 -[#64260]: https://github.com/cockroachdb/cockroach/pull/64260 -[#64285]: https://github.com/cockroachdb/cockroach/pull/64285 -[#64374]: https://github.com/cockroachdb/cockroach/pull/64374 -[#64381]: https://github.com/cockroachdb/cockroach/pull/64381 -[#64442]: https://github.com/cockroachdb/cockroach/pull/64442 -[#64501]: https://github.com/cockroachdb/cockroach/pull/64501 -[#64593]: https://github.com/cockroachdb/cockroach/pull/64593 -[#64605]: https://github.com/cockroachdb/cockroach/pull/64605 -[#64614]: https://github.com/cockroachdb/cockroach/pull/64614 -[#64672]: https://github.com/cockroachdb/cockroach/pull/64672 -[#64695]: https://github.com/cockroachdb/cockroach/pull/64695 -[#64701]: https://github.com/cockroachdb/cockroach/pull/64701 -[#64731]: https://github.com/cockroachdb/cockroach/pull/64731 -[#64737]: 
https://github.com/cockroachdb/cockroach/pull/64737 -[#64748]: https://github.com/cockroachdb/cockroach/pull/64748 -[#64758]: https://github.com/cockroachdb/cockroach/pull/64758 -[#64785]: https://github.com/cockroachdb/cockroach/pull/64785 -[#64788]: https://github.com/cockroachdb/cockroach/pull/64788 -[#64831]: https://github.com/cockroachdb/cockroach/pull/64831 -[#64879]: https://github.com/cockroachdb/cockroach/pull/64879 -[#64886]: https://github.com/cockroachdb/cockroach/pull/64886 -[#64887]: https://github.com/cockroachdb/cockroach/pull/64887 -[#64893]: https://github.com/cockroachdb/cockroach/pull/64893 -[#64908]: https://github.com/cockroachdb/cockroach/pull/64908 -[#64951]: https://github.com/cockroachdb/cockroach/pull/64951 -[#64953]: https://github.com/cockroachdb/cockroach/pull/64953 -[#64956]: https://github.com/cockroachdb/cockroach/pull/64956 -[#64977]: https://github.com/cockroachdb/cockroach/pull/64977 -[#64996]: https://github.com/cockroachdb/cockroach/pull/64996 -[#64997]: https://github.com/cockroachdb/cockroach/pull/64997 -[#65001]: https://github.com/cockroachdb/cockroach/pull/65001 -[#65014]: https://github.com/cockroachdb/cockroach/pull/65014 -[#65015]: https://github.com/cockroachdb/cockroach/pull/65015 -[#65018]: https://github.com/cockroachdb/cockroach/pull/65018 -[#65024]: https://github.com/cockroachdb/cockroach/pull/65024 -[#65099]: https://github.com/cockroachdb/cockroach/pull/65099 -[#65101]: https://github.com/cockroachdb/cockroach/pull/65101 -[#65126]: https://github.com/cockroachdb/cockroach/pull/65126 -[#65154]: https://github.com/cockroachdb/cockroach/pull/65154 -[#65188]: https://github.com/cockroachdb/cockroach/pull/65188 -[#65189]: https://github.com/cockroachdb/cockroach/pull/65189 -[#65209]: https://github.com/cockroachdb/cockroach/pull/65209 -[#65278]: https://github.com/cockroachdb/cockroach/pull/65278 -[#65289]: https://github.com/cockroachdb/cockroach/pull/65289 -[#65307]: https://github.com/cockroachdb/cockroach/pull/65307 -[#65321]: https://github.com/cockroachdb/cockroach/pull/65321 -[#65322]: https://github.com/cockroachdb/cockroach/pull/65322 -[#65324]: https://github.com/cockroachdb/cockroach/pull/65324 -[#65340]: https://github.com/cockroachdb/cockroach/pull/65340 -[#65355]: https://github.com/cockroachdb/cockroach/pull/65355 -[#65377]: https://github.com/cockroachdb/cockroach/pull/65377 -[#65396]: https://github.com/cockroachdb/cockroach/pull/65396 -[#65397]: https://github.com/cockroachdb/cockroach/pull/65397 -[#65413]: https://github.com/cockroachdb/cockroach/pull/65413 -[#65420]: https://github.com/cockroachdb/cockroach/pull/65420 -[#65431]: https://github.com/cockroachdb/cockroach/pull/65431 -[#65432]: https://github.com/cockroachdb/cockroach/pull/65432 -[#65445]: https://github.com/cockroachdb/cockroach/pull/65445 -[#65460]: https://github.com/cockroachdb/cockroach/pull/65460 -[#65495]: https://github.com/cockroachdb/cockroach/pull/65495 -[#65550]: https://github.com/cockroachdb/cockroach/pull/65550 -[#65592]: https://github.com/cockroachdb/cockroach/pull/65592 -[#65614]: https://github.com/cockroachdb/cockroach/pull/65614 -[#65620]: https://github.com/cockroachdb/cockroach/pull/65620 -[#65633]: https://github.com/cockroachdb/cockroach/pull/65633 -[#65634]: https://github.com/cockroachdb/cockroach/pull/65634 -[#65653]: https://github.com/cockroachdb/cockroach/pull/65653 -[#65661]: https://github.com/cockroachdb/cockroach/pull/65661 -[#65683]: https://github.com/cockroachdb/cockroach/pull/65683 -[#65717]: 
https://github.com/cockroachdb/cockroach/pull/65717 -[#65727]: https://github.com/cockroachdb/cockroach/pull/65727 -[#65766]: https://github.com/cockroachdb/cockroach/pull/65766 -[#65768]: https://github.com/cockroachdb/cockroach/pull/65768 -[#65775]: https://github.com/cockroachdb/cockroach/pull/65775 -[#65782]: https://github.com/cockroachdb/cockroach/pull/65782 -[#65784]: https://github.com/cockroachdb/cockroach/pull/65784 -[#65850]: https://github.com/cockroachdb/cockroach/pull/65850 -[#65854]: https://github.com/cockroachdb/cockroach/pull/65854 -[#65871]: https://github.com/cockroachdb/cockroach/pull/65871 -[#65902]: https://github.com/cockroachdb/cockroach/pull/65902 -[#65909]: https://github.com/cockroachdb/cockroach/pull/65909 -[#65914]: https://github.com/cockroachdb/cockroach/pull/65914 -[#65950]: https://github.com/cockroachdb/cockroach/pull/65950 -[#65953]: https://github.com/cockroachdb/cockroach/pull/65953 -[#65956]: https://github.com/cockroachdb/cockroach/pull/65956 -[#66002]: https://github.com/cockroachdb/cockroach/pull/66002 -[#66013]: https://github.com/cockroachdb/cockroach/pull/66013 -[#66020]: https://github.com/cockroachdb/cockroach/pull/66020 -[#66025]: https://github.com/cockroachdb/cockroach/pull/66025 -[#66033]: https://github.com/cockroachdb/cockroach/pull/66033 -[#66059]: https://github.com/cockroachdb/cockroach/pull/66059 -[#66065]: https://github.com/cockroachdb/cockroach/pull/66065 -[#66146]: https://github.com/cockroachdb/cockroach/pull/66146 -[#66147]: https://github.com/cockroachdb/cockroach/pull/66147 -[#66157]: https://github.com/cockroachdb/cockroach/pull/66157 -[#66196]: https://github.com/cockroachdb/cockroach/pull/66196 -[#66221]: https://github.com/cockroachdb/cockroach/pull/66221 -[#66225]: https://github.com/cockroachdb/cockroach/pull/66225 -[#66253]: https://github.com/cockroachdb/cockroach/pull/66253 -[#66258]: https://github.com/cockroachdb/cockroach/pull/66258 -[#66264]: https://github.com/cockroachdb/cockroach/pull/66264 -[#66277]: https://github.com/cockroachdb/cockroach/pull/66277 -[#66337]: https://github.com/cockroachdb/cockroach/pull/66337 -[#66345]: https://github.com/cockroachdb/cockroach/pull/66345 -[#66359]: https://github.com/cockroachdb/cockroach/pull/66359 -[#66362]: https://github.com/cockroachdb/cockroach/pull/66362 -[#66366]: https://github.com/cockroachdb/cockroach/pull/66366 -[#66370]: https://github.com/cockroachdb/cockroach/pull/66370 -[#66374]: https://github.com/cockroachdb/cockroach/pull/66374 -[#66375]: https://github.com/cockroachdb/cockroach/pull/66375 -[#66376]: https://github.com/cockroachdb/cockroach/pull/66376 -[#66399]: https://github.com/cockroachdb/cockroach/pull/66399 -[#66417]: https://github.com/cockroachdb/cockroach/pull/66417 -[#66422]: https://github.com/cockroachdb/cockroach/pull/66422 -[#66427]: https://github.com/cockroachdb/cockroach/pull/66427 -[#66441]: https://github.com/cockroachdb/cockroach/pull/66441 -[#66464]: https://github.com/cockroachdb/cockroach/pull/66464 -[#66495]: https://github.com/cockroachdb/cockroach/pull/66495 -[#66497]: https://github.com/cockroachdb/cockroach/pull/66497 -[#66535]: https://github.com/cockroachdb/cockroach/pull/66535 -[#66559]: https://github.com/cockroachdb/cockroach/pull/66559 -[#66565]: https://github.com/cockroachdb/cockroach/pull/66565 -[#66569]: https://github.com/cockroachdb/cockroach/pull/66569 -[#66578]: https://github.com/cockroachdb/cockroach/pull/66578 -[#66582]: https://github.com/cockroachdb/cockroach/pull/66582 -[#66595]: 
https://github.com/cockroachdb/cockroach/pull/66595 -[#66599]: https://github.com/cockroachdb/cockroach/pull/66599 -[#66605]: https://github.com/cockroachdb/cockroach/pull/66605 -[#66625]: https://github.com/cockroachdb/cockroach/pull/66625 -[#66629]: https://github.com/cockroachdb/cockroach/pull/66629 -[#66640]: https://github.com/cockroachdb/cockroach/pull/66640 -[#66645]: https://github.com/cockroachdb/cockroach/pull/66645 -[#66675]: https://github.com/cockroachdb/cockroach/pull/66675 -[#66679]: https://github.com/cockroachdb/cockroach/pull/66679 -[#66687]: https://github.com/cockroachdb/cockroach/pull/66687 -[#66688]: https://github.com/cockroachdb/cockroach/pull/66688 -[#66689]: https://github.com/cockroachdb/cockroach/pull/66689 -[#66734]: https://github.com/cockroachdb/cockroach/pull/66734 -[#66738]: https://github.com/cockroachdb/cockroach/pull/66738 -[#66760]: https://github.com/cockroachdb/cockroach/pull/66760 -[#66782]: https://github.com/cockroachdb/cockroach/pull/66782 -[#66785]: https://github.com/cockroachdb/cockroach/pull/66785 -[#66786]: https://github.com/cockroachdb/cockroach/pull/66786 -[#66793]: https://github.com/cockroachdb/cockroach/pull/66793 -[#66795]: https://github.com/cockroachdb/cockroach/pull/66795 -[#66807]: https://github.com/cockroachdb/cockroach/pull/66807 -[#66815]: https://github.com/cockroachdb/cockroach/pull/66815 -[#66842]: https://github.com/cockroachdb/cockroach/pull/66842 -[#66856]: https://github.com/cockroachdb/cockroach/pull/66856 -[#66859]: https://github.com/cockroachdb/cockroach/pull/66859 -[#66865]: https://github.com/cockroachdb/cockroach/pull/66865 -[#66870]: https://github.com/cockroachdb/cockroach/pull/66870 -[#66889]: https://github.com/cockroachdb/cockroach/pull/66889 -[#66893]: https://github.com/cockroachdb/cockroach/pull/66893 -[#66899]: https://github.com/cockroachdb/cockroach/pull/66899 -[#66914]: https://github.com/cockroachdb/cockroach/pull/66914 -[#66915]: https://github.com/cockroachdb/cockroach/pull/66915 -[#66919]: https://github.com/cockroachdb/cockroach/pull/66919 -[#66936]: https://github.com/cockroachdb/cockroach/pull/66936 -[#66941]: https://github.com/cockroachdb/cockroach/pull/66941 -[#66967]: https://github.com/cockroachdb/cockroach/pull/66967 -[#66969]: https://github.com/cockroachdb/cockroach/pull/66969 -[#66972]: https://github.com/cockroachdb/cockroach/pull/66972 -[#66973]: https://github.com/cockroachdb/cockroach/pull/66973 -[#67000]: https://github.com/cockroachdb/cockroach/pull/67000 -[#67011]: https://github.com/cockroachdb/cockroach/pull/67011 -[#67017]: https://github.com/cockroachdb/cockroach/pull/67017 -[#67022]: https://github.com/cockroachdb/cockroach/pull/67022 -[#67023]: https://github.com/cockroachdb/cockroach/pull/67023 -[#67075]: https://github.com/cockroachdb/cockroach/pull/67075 -[#67080]: https://github.com/cockroachdb/cockroach/pull/67080 -[#67090]: https://github.com/cockroachdb/cockroach/pull/67090 -[#67093]: https://github.com/cockroachdb/cockroach/pull/67093 -[#67094]: https://github.com/cockroachdb/cockroach/pull/67094 -[#67098]: https://github.com/cockroachdb/cockroach/pull/67098 -[#67121]: https://github.com/cockroachdb/cockroach/pull/67121 -[#67168]: https://github.com/cockroachdb/cockroach/pull/67168 -[#67175]: https://github.com/cockroachdb/cockroach/pull/67175 -[#67210]: https://github.com/cockroachdb/cockroach/pull/67210 -[#67215]: https://github.com/cockroachdb/cockroach/pull/67215 -[#67263]: https://github.com/cockroachdb/cockroach/pull/67263 -[#67275]: 
https://github.com/cockroachdb/cockroach/pull/67275 -[#67281]: https://github.com/cockroachdb/cockroach/pull/67281 -[#67285]: https://github.com/cockroachdb/cockroach/pull/67285 -[#67310]: https://github.com/cockroachdb/cockroach/pull/67310 -[#67319]: https://github.com/cockroachdb/cockroach/pull/67319 -[#67320]: https://github.com/cockroachdb/cockroach/pull/67320 -[#67327]: https://github.com/cockroachdb/cockroach/pull/67327 -[#67331]: https://github.com/cockroachdb/cockroach/pull/67331 -[#67343]: https://github.com/cockroachdb/cockroach/pull/67343 -[#67350]: https://github.com/cockroachdb/cockroach/pull/67350 -[#67355]: https://github.com/cockroachdb/cockroach/pull/67355 -[#67374]: https://github.com/cockroachdb/cockroach/pull/67374 -[#67386]: https://github.com/cockroachdb/cockroach/pull/67386 -[#67389]: https://github.com/cockroachdb/cockroach/pull/67389 -[#67426]: https://github.com/cockroachdb/cockroach/pull/67426 -[#67427]: https://github.com/cockroachdb/cockroach/pull/67427 -[#67431]: https://github.com/cockroachdb/cockroach/pull/67431 -[#67450]: https://github.com/cockroachdb/cockroach/pull/67450 -[#67451]: https://github.com/cockroachdb/cockroach/pull/67451 -[#67478]: https://github.com/cockroachdb/cockroach/pull/67478 -[#67486]: https://github.com/cockroachdb/cockroach/pull/67486 -[#67497]: https://github.com/cockroachdb/cockroach/pull/67497 -[#67509]: https://github.com/cockroachdb/cockroach/pull/67509 -[#67514]: https://github.com/cockroachdb/cockroach/pull/67514 -[#67531]: https://github.com/cockroachdb/cockroach/pull/67531 -[#67533]: https://github.com/cockroachdb/cockroach/pull/67533 -[#67537]: https://github.com/cockroachdb/cockroach/pull/67537 -[#67541]: https://github.com/cockroachdb/cockroach/pull/67541 -[#67547]: https://github.com/cockroachdb/cockroach/pull/67547 -[#67581]: https://github.com/cockroachdb/cockroach/pull/67581 -[#67641]: https://github.com/cockroachdb/cockroach/pull/67641 -[#67649]: https://github.com/cockroachdb/cockroach/pull/67649 -[#67652]: https://github.com/cockroachdb/cockroach/pull/67652 -[#67654]: https://github.com/cockroachdb/cockroach/pull/67654 -[#67671]: https://github.com/cockroachdb/cockroach/pull/67671 -[#67697]: https://github.com/cockroachdb/cockroach/pull/67697 -[#67703]: https://github.com/cockroachdb/cockroach/pull/67703 -[#67705]: https://github.com/cockroachdb/cockroach/pull/67705 -[#67714]: https://github.com/cockroachdb/cockroach/pull/67714 -[#67725]: https://github.com/cockroachdb/cockroach/pull/67725 -[#67764]: https://github.com/cockroachdb/cockroach/pull/67764 -[#67792]: https://github.com/cockroachdb/cockroach/pull/67792 -[#67799]: https://github.com/cockroachdb/cockroach/pull/67799 -[#67813]: https://github.com/cockroachdb/cockroach/pull/67813 -[#67814]: https://github.com/cockroachdb/cockroach/pull/67814 -[#67815]: https://github.com/cockroachdb/cockroach/pull/67815 -[#67837]: https://github.com/cockroachdb/cockroach/pull/67837 -[#67855]: https://github.com/cockroachdb/cockroach/pull/67855 -[#67865]: https://github.com/cockroachdb/cockroach/pull/67865 -[#67866]: https://github.com/cockroachdb/cockroach/pull/67866 -[#67872]: https://github.com/cockroachdb/cockroach/pull/67872 -[#67941]: https://github.com/cockroachdb/cockroach/pull/67941 -[#67947]: https://github.com/cockroachdb/cockroach/pull/67947 -[#67950]: https://github.com/cockroachdb/cockroach/pull/67950 -[#67953]: https://github.com/cockroachdb/cockroach/pull/67953 -[#67970]: https://github.com/cockroachdb/cockroach/pull/67970 -[#67979]: 
https://github.com/cockroachdb/cockroach/pull/67979 -[#67985]: https://github.com/cockroachdb/cockroach/pull/67985 -[#67986]: https://github.com/cockroachdb/cockroach/pull/67986 -[#67988]: https://github.com/cockroachdb/cockroach/pull/67988 -[#67994]: https://github.com/cockroachdb/cockroach/pull/67994 -[#67997]: https://github.com/cockroachdb/cockroach/pull/67997 -[#68001]: https://github.com/cockroachdb/cockroach/pull/68001 -[#68013]: https://github.com/cockroachdb/cockroach/pull/68013 -[#68018]: https://github.com/cockroachdb/cockroach/pull/68018 -[#68025]: https://github.com/cockroachdb/cockroach/pull/68025 -[#68026]: https://github.com/cockroachdb/cockroach/pull/68026 -[#68034]: https://github.com/cockroachdb/cockroach/pull/68034 -[#68041]: https://github.com/cockroachdb/cockroach/pull/68041 -[#68042]: https://github.com/cockroachdb/cockroach/pull/68042 -[#68045]: https://github.com/cockroachdb/cockroach/pull/68045 -[#68049]: https://github.com/cockroachdb/cockroach/pull/68049 -[#68074]: https://github.com/cockroachdb/cockroach/pull/68074 -[#68076]: https://github.com/cockroachdb/cockroach/pull/68076 -[#68079]: https://github.com/cockroachdb/cockroach/pull/68079 -[#68081]: https://github.com/cockroachdb/cockroach/pull/68081 -[#68093]: https://github.com/cockroachdb/cockroach/pull/68093 -[#68105]: https://github.com/cockroachdb/cockroach/pull/68105 -[#68128]: https://github.com/cockroachdb/cockroach/pull/68128 -[#68137]: https://github.com/cockroachdb/cockroach/pull/68137 -[#68141]: https://github.com/cockroachdb/cockroach/pull/68141 -[#68176]: https://github.com/cockroachdb/cockroach/pull/68176 -[#68182]: https://github.com/cockroachdb/cockroach/pull/68182 -[#68187]: https://github.com/cockroachdb/cockroach/pull/68187 -[#68191]: https://github.com/cockroachdb/cockroach/pull/68191 -[#68192]: https://github.com/cockroachdb/cockroach/pull/68192 -[#68194]: https://github.com/cockroachdb/cockroach/pull/68194 -[#68217]: https://github.com/cockroachdb/cockroach/pull/68217 -[#68218]: https://github.com/cockroachdb/cockroach/pull/68218 -[#68229]: https://github.com/cockroachdb/cockroach/pull/68229 -[#68245]: https://github.com/cockroachdb/cockroach/pull/68245 -[#68252]: https://github.com/cockroachdb/cockroach/pull/68252 -[#68257]: https://github.com/cockroachdb/cockroach/pull/68257 -[#68263]: https://github.com/cockroachdb/cockroach/pull/68263 -[#68280]: https://github.com/cockroachdb/cockroach/pull/68280 -[#68288]: https://github.com/cockroachdb/cockroach/pull/68288 -[#68299]: https://github.com/cockroachdb/cockroach/pull/68299 -[#68313]: https://github.com/cockroachdb/cockroach/pull/68313 -[#68314]: https://github.com/cockroachdb/cockroach/pull/68314 -[#68326]: https://github.com/cockroachdb/cockroach/pull/68326 -[#68337]: https://github.com/cockroachdb/cockroach/pull/68337 -[#68349]: https://github.com/cockroachdb/cockroach/pull/68349 -[#68351]: https://github.com/cockroachdb/cockroach/pull/68351 -[#68352]: https://github.com/cockroachdb/cockroach/pull/68352 -[#68360]: https://github.com/cockroachdb/cockroach/pull/68360 -[#68390]: https://github.com/cockroachdb/cockroach/pull/68390 -[#68391]: https://github.com/cockroachdb/cockroach/pull/68391 -[#68394]: https://github.com/cockroachdb/cockroach/pull/68394 -[#68401]: https://github.com/cockroachdb/cockroach/pull/68401 -[#68426]: https://github.com/cockroachdb/cockroach/pull/68426 -[#68429]: https://github.com/cockroachdb/cockroach/pull/68429 -[#68442]: https://github.com/cockroachdb/cockroach/pull/68442 -[#68446]: 
https://github.com/cockroachdb/cockroach/pull/68446 -[#68447]: https://github.com/cockroachdb/cockroach/pull/68447 -[#68456]: https://github.com/cockroachdb/cockroach/pull/68456 -[#68468]: https://github.com/cockroachdb/cockroach/pull/68468 -[#68476]: https://github.com/cockroachdb/cockroach/pull/68476 -[#68497]: https://github.com/cockroachdb/cockroach/pull/68497 -[#68500]: https://github.com/cockroachdb/cockroach/pull/68500 -[#68506]: https://github.com/cockroachdb/cockroach/pull/68506 -[#68524]: https://github.com/cockroachdb/cockroach/pull/68524 -[#68540]: https://github.com/cockroachdb/cockroach/pull/68540 -[#68566]: https://github.com/cockroachdb/cockroach/pull/68566 -[#68568]: https://github.com/cockroachdb/cockroach/pull/68568 -[#68595]: https://github.com/cockroachdb/cockroach/pull/68595 -[#68601]: https://github.com/cockroachdb/cockroach/pull/68601 -[#68606]: https://github.com/cockroachdb/cockroach/pull/68606 -[#68607]: https://github.com/cockroachdb/cockroach/pull/68607 -[#68627]: https://github.com/cockroachdb/cockroach/pull/68627 -[#68629]: https://github.com/cockroachdb/cockroach/pull/68629 -[#68633]: https://github.com/cockroachdb/cockroach/pull/68633 -[#68666]: https://github.com/cockroachdb/cockroach/pull/68666 -[#68679]: https://github.com/cockroachdb/cockroach/pull/68679 -[#68698]: https://github.com/cockroachdb/cockroach/pull/68698 -[#68699]: https://github.com/cockroachdb/cockroach/pull/68699 -[#68700]: https://github.com/cockroachdb/cockroach/pull/68700 -[#68706]: https://github.com/cockroachdb/cockroach/pull/68706 -[#68711]: https://github.com/cockroachdb/cockroach/pull/68711 -[#68715]: https://github.com/cockroachdb/cockroach/pull/68715 -[#68749]: https://github.com/cockroachdb/cockroach/pull/68749 -[#68750]: https://github.com/cockroachdb/cockroach/pull/68750 -[#68792]: https://github.com/cockroachdb/cockroach/pull/68792 -[#68807]: https://github.com/cockroachdb/cockroach/pull/68807 -[#68808]: https://github.com/cockroachdb/cockroach/pull/68808 -[#68818]: https://github.com/cockroachdb/cockroach/pull/68818 -[#68831]: https://github.com/cockroachdb/cockroach/pull/68831 -[#68877]: https://github.com/cockroachdb/cockroach/pull/68877 -[#68902]: https://github.com/cockroachdb/cockroach/pull/68902 -[#68909]: https://github.com/cockroachdb/cockroach/pull/68909 -[#68916]: https://github.com/cockroachdb/cockroach/pull/68916 -[#68918]: https://github.com/cockroachdb/cockroach/pull/68918 -[#68922]: https://github.com/cockroachdb/cockroach/pull/68922 -[#68929]: https://github.com/cockroachdb/cockroach/pull/68929 -[#68959]: https://github.com/cockroachdb/cockroach/pull/68959 -[#68967]: https://github.com/cockroachdb/cockroach/pull/68967 -[#68969]: https://github.com/cockroachdb/cockroach/pull/68969 -[#68972]: https://github.com/cockroachdb/cockroach/pull/68972 -[#68973]: https://github.com/cockroachdb/cockroach/pull/68973 -[#68978]: https://github.com/cockroachdb/cockroach/pull/68978 -[#68995]: https://github.com/cockroachdb/cockroach/pull/68995 -[#68997]: https://github.com/cockroachdb/cockroach/pull/68997 -[#69001]: https://github.com/cockroachdb/cockroach/pull/69001 -[#69019]: https://github.com/cockroachdb/cockroach/pull/69019 -[#69046]: https://github.com/cockroachdb/cockroach/pull/69046 -[#69047]: https://github.com/cockroachdb/cockroach/pull/69047 -[#69049]: https://github.com/cockroachdb/cockroach/pull/69049 -[#69051]: https://github.com/cockroachdb/cockroach/pull/69051 -[#69053]: https://github.com/cockroachdb/cockroach/pull/69053 -[#69055]: 
https://github.com/cockroachdb/cockroach/pull/69055 -[#69066]: https://github.com/cockroachdb/cockroach/pull/69066 -[#69067]: https://github.com/cockroachdb/cockroach/pull/69067 -[#69070]: https://github.com/cockroachdb/cockroach/pull/69070 -[#69087]: https://github.com/cockroachdb/cockroach/pull/69087 -[#69091]: https://github.com/cockroachdb/cockroach/pull/69091 -[#69092]: https://github.com/cockroachdb/cockroach/pull/69092 -[#69106]: https://github.com/cockroachdb/cockroach/pull/69106 -[#69107]: https://github.com/cockroachdb/cockroach/pull/69107 -[#69113]: https://github.com/cockroachdb/cockroach/pull/69113 -[#69114]: https://github.com/cockroachdb/cockroach/pull/69114 -[#69122]: https://github.com/cockroachdb/cockroach/pull/69122 -[#69126]: https://github.com/cockroachdb/cockroach/pull/69126 -[#69152]: https://github.com/cockroachdb/cockroach/pull/69152 -[#69164]: https://github.com/cockroachdb/cockroach/pull/69164 -[#69173]: https://github.com/cockroachdb/cockroach/pull/69173 -[#69185]: https://github.com/cockroachdb/cockroach/pull/69185 -[#69202]: https://github.com/cockroachdb/cockroach/pull/69202 -[#69205]: https://github.com/cockroachdb/cockroach/pull/69205 -[#69224]: https://github.com/cockroachdb/cockroach/pull/69224 -[#69234]: https://github.com/cockroachdb/cockroach/pull/69234 -[#69238]: https://github.com/cockroachdb/cockroach/pull/69238 -[#69243]: https://github.com/cockroachdb/cockroach/pull/69243 -[#69251]: https://github.com/cockroachdb/cockroach/pull/69251 -[#69262]: https://github.com/cockroachdb/cockroach/pull/69262 -[#69273]: https://github.com/cockroachdb/cockroach/pull/69273 -[#69304]: https://github.com/cockroachdb/cockroach/pull/69304 -[#69311]: https://github.com/cockroachdb/cockroach/pull/69311 -[#69318]: https://github.com/cockroachdb/cockroach/pull/69318 -[#69320]: https://github.com/cockroachdb/cockroach/pull/69320 -[#69355]: https://github.com/cockroachdb/cockroach/pull/69355 -[#69370]: https://github.com/cockroachdb/cockroach/pull/69370 -[#69376]: https://github.com/cockroachdb/cockroach/pull/69376 -[#69377]: https://github.com/cockroachdb/cockroach/pull/69377 -[#69381]: https://github.com/cockroachdb/cockroach/pull/69381 -[#69388]: https://github.com/cockroachdb/cockroach/pull/69388 -[#69395]: https://github.com/cockroachdb/cockroach/pull/69395 -[#69405]: https://github.com/cockroachdb/cockroach/pull/69405 -[#69444]: https://github.com/cockroachdb/cockroach/pull/69444 -[#69457]: https://github.com/cockroachdb/cockroach/pull/69457 -[#69469]: https://github.com/cockroachdb/cockroach/pull/69469 -[#69470]: https://github.com/cockroachdb/cockroach/pull/69470 -[#69478]: https://github.com/cockroachdb/cockroach/pull/69478 -[#69480]: https://github.com/cockroachdb/cockroach/pull/69480 -[#69481]: https://github.com/cockroachdb/cockroach/pull/69481 -[#69483]: https://github.com/cockroachdb/cockroach/pull/69483 -[#69491]: https://github.com/cockroachdb/cockroach/pull/69491 -[#69502]: https://github.com/cockroachdb/cockroach/pull/69502 -[#69577]: https://github.com/cockroachdb/cockroach/pull/69577 diff --git a/src/current/_includes/releases/v21.2/v21.2.0-beta.2.md b/src/current/_includes/releases/v21.2/v21.2.0-beta.2.md deleted file mode 100644 index 6052506113b..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.0-beta.2.md +++ /dev/null @@ -1,80 +0,0 @@ -## v21.2.0-beta.2 - -Release Date: September 27, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

Security updates

- -- SQL tenant servers will now use a TLS certificate for their HTTP server when one is present. Previously, this server never used TLS. [#70056][#70056] - -

SQL language changes

- -- The query logging enabled by `sql.telemetry.query_sampling.enabled` now avoids considering SQL statements issued internally by CockroachDB itself. [#70358][#70358] -- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) now supports user-defined types (UDTs) in default and computed columns (see the sketch below). [#70270][#70270] -- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) [regional by row](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-by-row-tables) tables is now supported. [#70270][#70270] - -
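A minimal sketch of the `IMPORT INTO` changes above; the type, table, and file URI are hypothetical:

~~~
CREATE TYPE ticket_status AS ENUM ('open', 'closed');

CREATE TABLE tickets (
    id INT PRIMARY KEY,
    status ticket_status DEFAULT 'open'  -- UDT default column
);

-- The UDT default is now filled in for imported rows:
IMPORT INTO tickets (id) CSV DATA ('nodelocal://1/tickets.csv');
~~~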

Operational changes

- -- The meaning of the recently introduced `transaction_rows_written_err` and `transaction_rows_read_err` settings (as well as the corresponding `_log` variants) has been adjusted to indicate the largest number of rows that is still allowed. In other words, reaching the limit originally resulted in an error; now only exceeding the limit does (see the sketch below). [#70014][#70014] - -
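For example, with a limit of 2, a transaction may now write exactly 2 rows without error; only a third write trips the limit. A minimal sketch, assuming the session variable is settable as in v21.2 (the table `t` is hypothetical):

~~~
SET transaction_rows_written_err = 2;
BEGIN;
INSERT INTO t VALUES (1), (2);  -- exactly at the limit: now allowed
INSERT INTO t VALUES (3);       -- exceeds the limit: returns an error
COMMIT;
~~~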

Command-line changes

- -- It is now possible to mix and match severity filters for different channels on a single log sink. For example: - - ~~~ - file-groups: monitoring: channels: {WARNING: [OPS, STORAGE], INFO: HEALTH} - ~~~ - - This defines a single file sink "monitoring" which captures all messages from the `HEALTH` channel, and only messages at severity `WARNING` or higher from the `OPS` and `STORAGE` channels. - - Another example: - - ~~~ - file-groups: default: channels: {INFO: all except STORAGE, WARNING: STORAGE} - ~~~ - - This captures all messages on all channels except the `STORAGE` channel, plus the messages at severity `WARNING` or higher from `STORAGE`. Note: the previous syntax remains supported. When `channel` is specified without explicit severities, the `filter` attribute is used as the default (as before). [#70411][#70411] - -- The default logging configuration now redirects the `HEALTH` logging channel to a distinct log file (`cockroach-health.log`). [#70411][#70411] -- The default logging configuration now redirects the output on the `SQL_SCHEMA` channel to a new separate file group `sql-schema` (`cockroach-sql-schema.log`), and the `PRIVILEGES` and `USER_ADMIN` channels to a new separate file group `security` (`cockroach-security.log`). The new `security` group has the `auditable` flag set. As before, the administrator can inspect the default configuration with `cockroach debug check-log-config`. [#70411][#70411] -- The server logging configuration now also includes a copy of messages from all logging channels at severity `WARNING` or higher into the default log file. This ensures that severe messages from all channels are also included in the main log file used during troubleshooting. [#70411][#70411] - -

<h3 id="v21-2-0-beta-2-db-console-changes">DB Console changes</h3>

- -- Added tooltips on the [**Databases** page](https://www.cockroachlabs.com/docs/v21.2/ui-databases-page) and made the SQL box scrollable. [#70070][#70070] -- Added a column selector to the [**Transactions** page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page). [#70286][#70286] -- Updated the [Jobs table](https://www.cockroachlabs.com/docs/v21.2/ui-jobs-page#jobs-list) style to match all other tables on the Console and also updated the column name from `Users` to `User`. [#70449][#70449] - -

<h3 id="v21-2-0-beta-2-bug-fixes">Bug fixes</h3>

-
-- Columns that were hidden by default were not being displayed when selected. This is now fixed. [#70054][#70054]
-- Fixed all broken links to documentation. [#70063][#70063]
-- Temporary tables were not properly cleaned up for tenants. This is now fixed. [#70129][#70129]
-- DNS unavailability during range 1 leaseholder loss will no longer cause significant latency increases for queries and other operations. [#70135][#70135]
-- The Last Execution Timestamp now updates properly. [#70297][#70297]
-- Fixed a bug in full cluster restores where dropped descriptor revisions would cause the [restore](https://www.cockroachlabs.com/docs/v21.2/restore) to fail. [#70368][#70368]
-- Default columns were displayed on the [**Statements** page](https://www.cockroachlabs.com/docs/cockroachcloud/statements-page) on the CockroachCloud console when the user never made any selection. This is now fixed. [#70206][#70206]
-- `cockroach mt start-proxy` now appropriately sets the `.ServerName` member of outgoing TLS connections. This allows the proxy to function correctly when the `--insecure` and `--skip-verify` CLI flags are omitted. [#70290][#70290]
-
-

<h3 id="v21-2-0-beta-2-contributors">Contributors</h3>

- -This release includes 44 merged PRs by 22 authors. - -[#70014]: https://github.com/cockroachdb/cockroach/pull/70014 -[#70054]: https://github.com/cockroachdb/cockroach/pull/70054 -[#70056]: https://github.com/cockroachdb/cockroach/pull/70056 -[#70063]: https://github.com/cockroachdb/cockroach/pull/70063 -[#70070]: https://github.com/cockroachdb/cockroach/pull/70070 -[#70129]: https://github.com/cockroachdb/cockroach/pull/70129 -[#70135]: https://github.com/cockroachdb/cockroach/pull/70135 -[#70206]: https://github.com/cockroachdb/cockroach/pull/70206 -[#70270]: https://github.com/cockroachdb/cockroach/pull/70270 -[#70286]: https://github.com/cockroachdb/cockroach/pull/70286 -[#70290]: https://github.com/cockroachdb/cockroach/pull/70290 -[#70297]: https://github.com/cockroachdb/cockroach/pull/70297 -[#70341]: https://github.com/cockroachdb/cockroach/pull/70341 -[#70358]: https://github.com/cockroachdb/cockroach/pull/70358 -[#70368]: https://github.com/cockroachdb/cockroach/pull/70368 -[#70411]: https://github.com/cockroachdb/cockroach/pull/70411 -[#70449]: https://github.com/cockroachdb/cockroach/pull/70449 diff --git a/src/current/_includes/releases/v21.2/v21.2.0-beta.3.md b/src/current/_includes/releases/v21.2/v21.2.0-beta.3.md deleted file mode 100644 index d20d6618a8c..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.0-beta.3.md +++ /dev/null @@ -1,39 +0,0 @@ -## v21.2.0-beta.3 - -Release Date: October 4, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-0-beta-3-sql-language-changes">SQL language changes</h3>

- -- [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v21.2/show-jobs) will now include the newly added columns from `crdb_internal.jobs` (`last_run`, `next_run`, `num_runs`, and `execution_errors`). The columns capture state related to retries, failures, and exponential backoff. [#70791][#70791] - -

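For example, the new columns can be queried directly; this sketch uses the square-bracket form to treat `SHOW JOBS` as a statement source (the ordering is illustrative):

~~~ sql
SELECT job_id, job_type, last_run, next_run, num_runs, execution_errors
FROM [SHOW JOBS]
ORDER BY num_runs DESC;
~~~
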
<h3 id="v21-2-0-beta-3-db-console-changes">DB Console changes</h3>

- -- On the [Statement Details page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page), CockroachDB now shows info not yet sampled as unavailable, instead of with a value of `0`. [#70569][#70569] -- On the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.2/explain) plan tab in the [Statement Details page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page), users can now hover over underlined `EXPLAIN` plan attributes to get tooltips with more information on the attribute. [#70631][#70631] -- Persisted statements are now enabled for tenants. In the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages, users now view the aggregated statistics for statements and transactions over a date range. A date range selector is present in both pages in order to select the range of persisted stats to view. Note that the two pages share a single date range. [#70777][#70777] -- Removed last cleared status from the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page), and [Transaction Details](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page#transaction-details-page) pages and updated the tooltip on clear SQL stats to indicate it will also clear the persisted data. [#70777][#70777] -- For URLs on the [Statement Details page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page), the app name and database name are now query string parameters. The route to statement details is now definitively `/statement/:implicitTxn/:statement?{queryStringParams}`, e.g., `statement/true/SELECT%20city%2C%20id%20FROM%20vehicles%20WHERE%20city%20%3D%20%241?database=movr&app=movr` [#70804][#70804] - -

<h3 id="v21-2-0-beta-3-bug-fixes">Bug fixes</h3>

- -- Fixed a problem where the [TPC-C workload](https://www.cockroachlabs.com/docs/v21.2/performance-benchmarking-with-tpcc-small), when used in a [multi-region setup](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview), did not properly assign workers to the local partitions. [#70613][#70613] -- Fixed styling issues in the tooltip text on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages' table columns. [#70650][#70650] -- Fixed a bug where [`EXPLAIN (VEC)`](https://www.cockroachlabs.com/docs/v21.2/explain#vec-option) on some queries could lead to a crash. The bug was present only in v21.2 [testing releases]({% link releases/index.md %}#testing-releases). [#70524][#70524] -- The [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages are now able to display and reset persisted SQL stats. [#70777][#70777] -- Fixed a bug where the exit status of the [`cockroach` command](https://www.cockroachlabs.com/docs/v21.2/cockroach-commands) did not follow the previously-documented table of exit status codes when an error occurred during the command startup. Only errors occurring after startup were reported using the correct code. This bug had existed ever since reference exit status codes were introduced. [#70676][#70676] - -

<h3 id="v21-2-0-beta-3-contributors">Contributors</h3>

- -This release includes 26 merged PRs by 18 authors. - -[#70524]: https://github.com/cockroachdb/cockroach/pull/70524 -[#70569]: https://github.com/cockroachdb/cockroach/pull/70569 -[#70613]: https://github.com/cockroachdb/cockroach/pull/70613 -[#70631]: https://github.com/cockroachdb/cockroach/pull/70631 -[#70650]: https://github.com/cockroachdb/cockroach/pull/70650 -[#70676]: https://github.com/cockroachdb/cockroach/pull/70676 -[#70777]: https://github.com/cockroachdb/cockroach/pull/70777 -[#70791]: https://github.com/cockroachdb/cockroach/pull/70791 -[#70804]: https://github.com/cockroachdb/cockroach/pull/70804 diff --git a/src/current/_includes/releases/v21.2/v21.2.0-beta.4.md b/src/current/_includes/releases/v21.2/v21.2.0-beta.4.md deleted file mode 100644 index 80a0627c6ad..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.0-beta.4.md +++ /dev/null @@ -1,55 +0,0 @@ -## v21.2.0-beta.4 - -Release Date: October 11, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-0-beta-4-enterprise-edition-changes">{{ site.data.products.enterprise }} edition changes</h3>

- -- Fixed a bug that could have led to duplicate instances of a single [changefeed](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) job running for prolonged periods of time. [#70921][#70921] - -

<h3 id="v21-2-0-beta-4-sql-language-changes">SQL language changes</h3>

- -- The [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings#settings) `sql.telemetry.query_sampling.qps_threshold`, and `sql.telemetry.query_sampling.sample_rate` have been removed. A new setting, `sql.telemetry.query_sampling.max_event_frequency` has been introduced, with a default value of 10 events per second. [#70960][#70960] -- [EXPLAIN ANALYZE (DEBUG)](https://www.cockroachlabs.com/docs/v21.2/explain-analyze) now returns an error for non-system tenants, since we cannot yet support it correctly. [#70949][#70949] - -

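A sketch of tuning the new setting (the value is illustrative; the default is 10 events per second, per the note above):

~~~ sql
-- Allow up to 20 sampled query events per second.
SET CLUSTER SETTING sql.telemetry.query_sampling.max_event_frequency = 20;
~~~
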
<h3 id="v21-2-0-beta-4-command-line-changes">Command-line changes</h3>

- -- Version details have been added to all [JSON formatted log entries](https://www.cockroachlabs.com/docs/v21.2/log-formats#format-json). Refer to the [reference](https://www.cockroachlabs.com/docs/v21.2/eventlog) for details about the field. [#70450][#70450] - -

<h3 id="v21-2-0-beta-4-db-console-changes">DB Console changes</h3>

- -- Removed the link to Statement Details on the [Session table](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page) [#70805][#70805] -- A new column, **Interval Start Time (UTC)**, has been added to both the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) tables. The column represents the start time in UTC of the statistics aggregation interval for a statement. By default, the aggregation interval is 1 hour. **Interval Start Time** has been added to the [Statement details page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page). A new query parameter has been added to Statement details page. If the search param `aggregated_ts` is set, it will display the statement details for statements aggregated at that interval. If unset, it will display the statement details for the statement aggregated over the date range. [#70895][#70895] -- The [**Terminate Session** and **Terminate Statement** buttons](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page#session-details) have been temporarily disabled on the Sessions page. [#71014][#71014] -- Updated color, fonts, and spaces on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), [Statements Details](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page), [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page), [Transactions Details](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page#transaction-details-page), and [Sessions](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page) pages [#71020][#71020] -- Fixed a bug where the [Clock Offset graph](https://www.cockroachlabs.com/docs/v21.2/ui-runtime-dashboard) rendered incorrectly on nodes with multiple stores. [#70468][#70468] -- Fixed a bug where replicas awaiting to be garbage collected were causing the [Range Report](https://www.cockroachlabs.com/docs/v21.2/ui-debug-pages) page to not load at all due to a JS error. The page will now load and display an empty **Replica Type** while in this state. [#70211][#70211] - -

<h3 id="v21-2-0-beta-4-bug-fixes">Bug fixes</h3>

- -- The selected app name in the [Statements page of the DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) is now derived from the route parameters. [#71024][#71024] -- Fixed a bug that addresses an issue in Pebble where a key can be dropped from an LSM snapshot if the key was deleted by a range tombstone after the snapshot was acquired. [#70969][#70969] -- The Statement details page in the Cloud console now filters statements by the provided `aggregated_ts` query parameter. [#71081][#71081] -- The [SQL layer](https://www.cockroachlabs.com/docs/v21.2/architecture/sql-layer) no longer panics under memory pressure when the query profiler is enabled. [#71007][#71007] - -

<h3 id="v21-2-0-beta-4-contributors">Contributors</h3>

- -This release includes 29 merged PRs by 17 authors. - -[#70211]: https://github.com/cockroachdb/cockroach/pull/70211 -[#70450]: https://github.com/cockroachdb/cockroach/pull/70450 -[#70468]: https://github.com/cockroachdb/cockroach/pull/70468 -[#70805]: https://github.com/cockroachdb/cockroach/pull/70805 -[#70895]: https://github.com/cockroachdb/cockroach/pull/70895 -[#70921]: https://github.com/cockroachdb/cockroach/pull/70921 -[#70949]: https://github.com/cockroachdb/cockroach/pull/70949 -[#70960]: https://github.com/cockroachdb/cockroach/pull/70960 -[#70969]: https://github.com/cockroachdb/cockroach/pull/70969 -[#71007]: https://github.com/cockroachdb/cockroach/pull/71007 -[#71009]: https://github.com/cockroachdb/cockroach/pull/71009 -[#71014]: https://github.com/cockroachdb/cockroach/pull/71014 -[#71020]: https://github.com/cockroachdb/cockroach/pull/71020 -[#71024]: https://github.com/cockroachdb/cockroach/pull/71024 -[#71036]: https://github.com/cockroachdb/cockroach/pull/71036 -[#71081]: https://github.com/cockroachdb/cockroach/pull/71081 diff --git a/src/current/_includes/releases/v21.2/v21.2.0-rc.1.md b/src/current/_includes/releases/v21.2/v21.2.0-rc.1.md deleted file mode 100644 index 7f4e1101988..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.0-rc.1.md +++ /dev/null @@ -1,42 +0,0 @@ -## v21.2.0-rc.1 - -Release Date: October 18, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-0-rc-1-security-updates">Security updates</h3>

- -- It is no longer possible to use node TLS certificates to establish a SQL connection with any username other than `node`. This had existed as a way for an operator to use the [node certificate](https://www.cockroachlabs.com/docs/v21.2/authentication#using-digital-certificates-with-cockroachdb) to perform operations on behalf of another SQL user. However, this isn't necessary: an operator with access to a node cert can log in as `node` directly and create new credentials for another user. By removing this facility, we tighten the guarantee that the principal in the TLS client cert always matches the SQL identity. [#71188][#71188] -- Multi-tenant SQL servers now reuse the tenant client certificate (`client-tenant.NN.crt`) for SQL-to-SQL communication. Existing deployments must regenerate the certificates with dual purpose (client and server authentication). [#71402][#71402] - -

<h3 id="v21-2-0-rc-1-sql-language-changes">SQL language changes</h3>

- -- SQL tenants will now spill to disk by default when processing large queries, instead of to memory. [#71218][#71218] - -

<h3 id="v21-2-0-rc-1-command-line-changes">Command-line changes</h3>

- -- `cockroach mt start-sql` will now support the following flags to configure ephemeral storage for SQL when processing large queries: `--store`, `--temp-dir`, and `--max-disk-temp-storage`. [#71218][#71218] -- `cockroach mt start-sql` will now support the `--max-sql-memory` flag to configure maximum SQL memory capacity to store temporary data. [#71276][#71276] - -

<h3 id="v21-2-0-rc-1-db-console-changes">DB Console changes</h3>

- -- Non-Admin users of the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview) have regained the ability to view the [Cluster Overview page](https://www.cockroachlabs.com/docs/v21.2/ui-cluster-overview-page). Users without the [Admin role](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role) will still see most data about their nodes, but information such as command-line arguments, environment variables, and IP addresses and DNS names of nodes will be hidden. [#71383][#71383] - -

<h3 id="v21-2-0-rc-1-bug-fixes">Bug fixes</h3>

- -- Fixed a bug that caused the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) to erroneously discard [`WHERE` filters](https://www.cockroachlabs.com/docs/v21.2/selection-queries) when executing prepared statements, causing incorrect results to be returned. This bug was present since version [v21.1.9](v21.1.html#v21-1-9). [#71118][#71118] -- In {{ site.data.products.enterprise }} clusters that are [upgraded](https://www.cockroachlabs.com/docs/v21.2/upgrade-cockroach-version) to this version, fixed a bug that prevents [changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) and [backups](https://www.cockroachlabs.com/docs/v21.2/take-full-and-incremental-backups) from being exercised as of a point in time prior to the upgrade. [#71319][#71319] -- Fixed a bug from an earlier v21.2 beta whereby a migration to create the `system.statement_statistics` table was not run. [#71477][#71477] - -

<h3 id="v21-2-0-rc-1-contributors">Contributors</h3>

- -This release includes 18 merged PRs by 13 authors. - -[#71118]: https://github.com/cockroachdb/cockroach/pull/71118 -[#71188]: https://github.com/cockroachdb/cockroach/pull/71188 -[#71218]: https://github.com/cockroachdb/cockroach/pull/71218 -[#71276]: https://github.com/cockroachdb/cockroach/pull/71276 -[#71319]: https://github.com/cockroachdb/cockroach/pull/71319 -[#71383]: https://github.com/cockroachdb/cockroach/pull/71383 -[#71402]: https://github.com/cockroachdb/cockroach/pull/71402 -[#71477]: https://github.com/cockroachdb/cockroach/pull/71477 diff --git a/src/current/_includes/releases/v21.2/v21.2.0-rc.2.md b/src/current/_includes/releases/v21.2/v21.2.0-rc.2.md deleted file mode 100644 index 6b35965e4c3..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.0-rc.2.md +++ /dev/null @@ -1,19 +0,0 @@ -## v21.2.0-rc.2 - -Release Date: October 25, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-0-rc-2-bug-fixes">Bug fixes</h3>

- -- The [**Transaction** page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) no longer crashes when a statement is not found. [#71599][#71599] -- Fixed certificate bundle building logic. [#71593][#71593] -- Fixed an internal error with [joins](https://www.cockroachlabs.com/docs/v21.2/joins) that are both `LATERAL` and `NATURAL`/`USING`. [#70801][#70801] - -

<h3 id="v21-2-0-rc-2-contributors">Contributors</h3>

- -This release includes 5 merged PRs by 5 authors. - -[#70801]: https://github.com/cockroachdb/cockroach/pull/70801 -[#71593]: https://github.com/cockroachdb/cockroach/pull/71593 -[#71599]: https://github.com/cockroachdb/cockroach/pull/71599 diff --git a/src/current/_includes/releases/v21.2/v21.2.0-rc.3.md b/src/current/_includes/releases/v21.2/v21.2.0-rc.3.md deleted file mode 100644 index 154943d4b24..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.0-rc.3.md +++ /dev/null @@ -1,23 +0,0 @@ -## v21.2.0-rc.3 - -Release Date: November 1, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-0-rc-3-bug-fixes">Bug fixes</h3>

- -- Previously, CockroachDB could incorrectly read the data of a unique secondary index that used to be a primary index created by an [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.2/alter-primary-key) command in v21.1.x or prior versions. This is now fixed. [#71586][#71586] -- Previously, CockroachDB could crash if network connectivity was impaired. The stack trace (in `cockroach-stderr.log`) would contain `server.(*statusServer).NodesUI` in that case. This is now fixed. [#71756][#71756] -- A bug has been fixed which caused incorrect results for some queries that utilized a zig-zag join. The bug could only reproduce on tables with at least two multi-column indexes with nullable columns. The bug was present since v19.2.0. [#71824][#71824] -- Fixed a rare deadlock on system ranges that could happen when an internal transaction [`COMMIT`](https://www.cockroachlabs.com/docs/v21.2/commit-transaction)/[`ROLLBACK`](https://www.cockroachlabs.com/docs/v21.2/rollback-transaction) that was a no-op (did not make any writes) triggered gossip data propagation. [#71978][#71978] -- Previously, some instances of a broken client connection could cause an infinite loop while processing commands from the client. This is now fixed. [#72004][#72004] - -

<h3 id="v21-2-0-rc-3-contributors">Contributors</h3>

-
-This release includes 10 merged PRs by 6 authors.
-
-[#71586]: https://github.com/cockroachdb/cockroach/pull/71586
-[#71756]: https://github.com/cockroachdb/cockroach/pull/71756
-[#71824]: https://github.com/cockroachdb/cockroach/pull/71824
-[#71978]: https://github.com/cockroachdb/cockroach/pull/71978
-[#72004]: https://github.com/cockroachdb/cockroach/pull/72004
diff --git a/src/current/_includes/releases/v21.2/v21.2.0.md b/src/current/_includes/releases/v21.2/v21.2.0.md
deleted file mode 100644
index abd77ac5493..00000000000
--- a/src/current/_includes/releases/v21.2/v21.2.0.md
+++ /dev/null
@@ -1,124 +0,0 @@
-## v21.2.0
-
-Release Date: November 16, 2021
-
-With the release of CockroachDB v21.2, we've made a variety of management, performance, and compatibility improvements. Check out a [summary of the most significant user-facing changes](#v21-2-0-feature-summary) and then [upgrade to CockroachDB v21.2](https://www.cockroachlabs.com/docs/v21.2/upgrade-cockroach-version).
-
-To learn more:
-
-- Read the [v21.2 blog post](https://www.cockroachlabs.com/blog/cockroachdb-21-2-release/).
-- Watch the [live demo and Q&A session](https://www.cockroachlabs.com/webinars/cockroachdb-21-2-release-na/) recorded on Tuesday, December 7.
-
-{{site.data.alerts.callout_danger}}
-During an upgrade of a CockroachDB cluster from v21.1.x → v21.2.0, [backups](https://www.cockroachlabs.com/docs/v21.2/take-full-and-incremental-backups) will fail until the upgrade is [finalized](https://www.cockroachlabs.com/docs/v21.2/upgrade-cockroach-version#step-3-decide-how-the-upgrade-will-be-finalized). After the upgrade is complete and finalized, backups will continue as normal.
-
-This issue will only occur if the upgrade coincides with a backup. For small clusters, where the upgrade is quick, there may be no overlap, and you will not experience this issue.
-
-For more information, including mitigation, see [Technical Advisory 72839](https://www.cockroachlabs.com/docs/advisories/a72839).
-{{site.data.alerts.end}}
-
-{% include releases/release-downloads-docker-image.md release=include.release %}
-
-

<h3 id="v21-2-0-cockroachdb-cloud">CockroachDB {{ site.data.products.cloud }}</h3>

- -- Get a free v21.2 cluster on CockroachDB {{ site.data.products.serverless }}. -- Learn about recent updates to CockroachDB {{ site.data.products.cloud }} in the [CockroachDB {{ site.data.products.cloud }} Release Notes]({% link releases/cloud.md %}). - -

<h3 id="v21-2-0-feature-summary">Feature summary</h3>

- -This section summarizes the most significant user-facing changes in v21.2.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases. You can also search for [what's new in v21.2 in our docs](https://www.cockroachlabs.com/docs/search?query=new+in+v21.2). - -{{site.data.alerts.callout_info}} -"Core" features are freely available in the core version of CockroachDB and do not require an enterprise license. "Enterprise" features require an [enterprise license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). [CockroachDB {{ site.data.products.cloud }} clusters](https://cockroachlabs.cloud/) include all enterprise features. You can also use [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) to test enterprise features in a local, temporary cluster. -{{site.data.alerts.end}} - -- [SQL](#v21-2-0-sql) -- [Recovery and I/O](#v21-2-0-recovery-and-i-o) -- [Database operations](#v21-2-0-database-operations) -- [Backward-incompatible changes](#v21-2-0-backward-incompatible-changes) -- [Deprecations](#v21-2-0-deprecations) -- [Known limitations](#v21-2-0-known-limitations) -- [Education](#v21-2-0-education) - - - -

<h3 id="v21-2-0-sql">SQL</h3>

- -Version | Feature | Description ------------+--------------------------------+------------------------------ -Enterprise | **Multi-region observability** | You can now surface region information by using the [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze) statements. This information is also now available in the [DB Console](#v21-2-0-multi-region-db-console). -Enterprise | **Restricted and default placement** | You can now use the [`ALTER DATABASE ... PLACEMENT RESTRICTED`](https://www.cockroachlabs.com/docs/v21.2/placement-restricted) statement to constrain the replica placement for a [multi-region database](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview)'s [regional tables](https://www.cockroachlabs.com/docs/v21.2/regional-tables) to the [home regions](https://www.cockroachlabs.com/docs/v21.2/set-locality#crdb_region) associated with those tables. -Enterprise | **Bounded staleness reads** | [Bounded staleness reads](https://www.cockroachlabs.com/docs/v21.2/follower-reads#bounded-staleness-reads) are now available in CockroachDB. These use a dynamic, system-determined timestamp to minimize staleness while being more tolerant to replication lag than exact staleness reads. This dynamic timestamp is returned by the `with_min_timestamp()` or `with_max_staleness()` [functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators). In addition, bounded staleness reads provide the ability to serve reads from local replicas even in the presence of [network partitions](https://www.cockroachlabs.com/docs/v21.2/cluster-setup-troubleshooting#network-partition) or other failures. -Core | **Privilege inheritance** | CockroachDB's model for inheritance of privileges that cascade from schema objects now matches PostgreSQL. Added support for [`ALTER DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v21.2/alter-default-privileges) and [`SHOW DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v21.2/show-default-privileges). -Core | **`ON UPDATE` expressions** | An [`ON UPDATE` expression](https://www.cockroachlabs.com/docs/v21.2/add-column#add-a-column-with-an-on-update-expression) can now be added to a column to update column values when an [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update) or [`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert) statement modifies a different column value in the same row, or when an `ON UPDATE CASCADE` expression on a different column modifies an existing value in the same row. -Core | **More granular controls for session variables** | There are now more ways to control CockroachDB's behavior through [session variables](https://www.cockroachlabs.com/docs/v21.2/set-vars). You can now set user or role-level defaults by using the [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v21.2/alter-role) statement. CockroachDB also now supports setting session variables for the duration of a single transaction, using [the `LOCAL` keyword](https://www.cockroachlabs.com/docs/v21.2/set-vars#set-local). -Core | **Transaction guardrails** | [Transaction guardrails](https://www.cockroachlabs.com/docs/v21.2/transactions#limit-the-number-of-rows-written-or-read-in-a-transaction) have been added to CockroachDB to improve production stability. These can help prevent cluster unavailability and protect the cluster against multiple developer workloads with problematic SQL statements. 
-Core | **Expression indexes** | [Indexes on expressions](https://www.cockroachlabs.com/docs/v21.2/expression-indexes) can now be created. These indexes speed up queries that filter on the result of that expression, and are especially useful for indexing only a specific field of a `JSON` object. -Core | **Correlated CTEs** | [Correlated common table expressions](https://www.cockroachlabs.com/docs/v21.2/common-table-expressions#correlated-common-table-expressions) (CTEs) are now supported in CockroachDB. A correlated CTE is a common table expression that is contained in a subquery and references columns defined outside of the subquery. -Core | **Admission control** | A new [admission control system](https://www.cockroachlabs.com/docs/v21.2/architecture/admission-control) has been added. CockroachDB implements this optional admission control system to maintain cluster performance and availability when some nodes experience high load. Admission control is disabled by default.

Additionally, an [**Overload** dashboard](https://www.cockroachlabs.com/docs/v21.2/ui-overload-dashboard) has been added to the DB Console. Use this dashboard to monitor the performance of the parts of your cluster relevant to the cluster's [admission control system](https://www.cockroachlabs.com/docs/v21.2/architecture/admission-control). This includes CPU usage, the runnable goroutines waiting per CPU, the health of the persistent stores, and the performance of admission control system when it is enabled. -Core | **Persistent statement and transaction statistics** | Statistics information on the [**Statements**](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages within the DB Console can now be persisted for longer than one hour. -Core | **Index usage statistics** | Index usage statistics are now supported for CockroachDB to help [identify unused indexes](https://www.cockroachlabs.com/docs/v21.2/performance-recipes#slow-writes) causing unnecessary performance overhead for your workload. Index read statistics are available in the [`crdb_internal` schema](https://www.cockroachlabs.com/docs/v21.2/crdb-internal#tables) for programmatic access using SQL. -Core | **Third-party tool support** | [Sequelize](https://www.cockroachlabs.com/docs/v21.2/build-a-nodejs-app-with-cockroachdb-sequelize), a Javascript object-relational mapper (ORM), and [Alembic](https://www.cockroachlabs.com/docs/v21.2/alembic), a schema migration tool for SQLAlchemy users, are now fully supported. We have also improved testing for [PgBouncer](https://dzone.com/articles/using-pgbouncer-with-cockroachdb), an external connection pooler for PostgreSQL. -Core | **Contention views** | You can now use pre-built contention views in [`crdb_internal`](https://www.cockroachlabs.com/docs/v21.2/crdb-internal#tables) to quickly identify the top contending indexes. These views can be used to [understand where and avoid contention](https://www.cockroachlabs.com/docs/v21.2/performance-best-practices-overview#understanding-and-avoiding-transaction-contention) happening in your workload. - -

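As one concrete illustration of the SQL features above, a minimal sketch of an expression index on a `JSONB` field (the table and field names are hypothetical):

~~~ sql
CREATE TABLE users (id INT PRIMARY KEY, profile JSONB);
-- New in v21.2: index the result of an expression, not just a column.
CREATE INDEX users_email_idx ON users ((profile->>'email'));
-- Queries that filter on the same expression can use the index.
SELECT id FROM users WHERE profile->>'email' = 'ada@example.com';
~~~
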
<h3 id="v21-2-0-recovery-and-i-o">Recovery and I/O</h3>

- -Version | Feature | Description ------------+-----------------------------------------------+------------ -{{ site.data.products.enterprise }} | **`BACKUP` / `RESTORE` scalability** | [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) and [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) performance has been improved for larger data volumes, more frequent backups, and clusters with more or larger nodes. -{{ site.data.products.enterprise }} | **Webhook changefeed sink** | You can now stream individual [changefeed](https://www.cockroachlabs.com/docs/v21.2/create-changefeed) messages as webhook messages to a newly supported [`webhook-https` sink](https://www.cockroachlabs.com/docs/v21.2/changefeed-sinks#webhook-sink). The webhook sink is a flexible, general-purpose sink solution that does not require managing a Kafka cluster or cloud storage sink. -{{ site.data.products.enterprise }} | **Multi-region bulk operations improvements** | The following bulk operations are now supported:
  • [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) on [`REGIONAL BY TABLE`](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-tables) and [`GLOBAL`](https://www.cockroachlabs.com/docs/v21.2/set-locality#global) tables is supported with some limitations. For more details, see [Restoring to multi-region databases](https://www.cockroachlabs.com/docs/v21.2/restore#restoring-to-multi-region-databases).
  • [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) now supports importing into [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.2/set-locality#regional-by-row) tables.
-{{ site.data.products.enterprise }} | **Changefeeds for regional by row tables** | [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/create-changefeed) are now supported on [regional by row tables](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-by-row-tables). -{{ site.data.products.enterprise }} | **Changefeed observability** | You can now display specific fields related to changefeed jobs by running [`SHOW CHANGEFEED JOBS`](https://www.cockroachlabs.com/docs/v21.2/show-jobs#show-changefeed-jobs). - -

<h3 id="v21-2-0-database-operations">Database operations</h3>

- -Version | Feature | Description ------------+----------------------------------------+------------ -Enterprise | **Kubernetes Operator on Amazon EKS** | The Kubernetes Operator is now supported on [Amazon EKS (Elastic Kubernetes Service)](https://www.cockroachlabs.com/docs/v21.2/deploy-cockroachdb-with-kubernetes#hosted-eks). -Enterprise | **Extend the Kubernetes Operator API** | The [Kubernetes Operator API has been extended](https://www.cockroachlabs.com/docs/v21.2/schedule-cockroachdb-kubernetes) to a state where it can support the various types of single-region deployments the Helm chart currently supports. This includes:
  • Node affinity
  • Pod affinity and anti-affinity
  • Taints and tolerations
  • Custom labels and annotations
    • -Enterprise | **Multi-region in the DB Console** | The DB Console now surfaces multi-region information to provide observability into global databases and their workloads. You can view multi-region details on the [**Databases**](https://www.cockroachlabs.com/docs/v21.2/ui-databases-page), [**Statements**](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), and [**Transactions**](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages. -Core | **Automatic ballast files** | CockroachDB now automatically creates an emergency ballast file at startup time. The `cockroach debug ballast` command is still available but deprecated. For more information about how automatic ballast file creation works, see [automatic ballast files](https://www.cockroachlabs.com/docs/v21.2/cluster-setup-troubleshooting#automatic-ballast-files). - -

<h3 id="v21-2-0-backward-incompatible-changes">Backward-incompatible changes</h3>

-
-Before [upgrading to CockroachDB v21.2](https://www.cockroachlabs.com/docs/v21.2/upgrade-cockroach-version), be sure to review the following backward-incompatible changes and adjust your deployment as necessary.
-
-- Interleaved tables and interleaved indexes have been removed. Before upgrading to v21.2, [convert interleaved tables](https://www.cockroachlabs.com/docs/v21.1/interleave-in-parent#convert-interleaved-tables) and [replace interleaved indexes](https://www.cockroachlabs.com/docs/v21.1/interleave-in-parent#replace-interleaved-indexes). Clusters with interleaved tables and indexes cannot finalize the v21.2 upgrade.
-- Previously, CockroachDB only supported the YMD format for parsing timestamps from strings. It now also supports the MDY format to better align with PostgreSQL. A timestamp such as `1-1-18`, which was previously interpreted as `2001-01-18`, will now be interpreted as `2018-01-01`. To continue interpreting the timestamp in the YMD format, the first number can be represented with 4 digits, `2001-1-18`.
-- The deprecated [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `cloudstorage.gs.default.key` has been removed, and the behavior of the `AUTH` parameter in Google Cloud Storage [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) and `IMPORT` URIs has been changed. The default behavior is now that of `AUTH=specified`, which uses the credentials passed in the `CREDENTIALS` parameter, and the previous default behavior of using the node's implicit access (via its machine account or role) now requires explicitly passing `AUTH=implicit`.
-- Switched types from `TEXT` to `"char"` for compatibility with PostgreSQL in the following columns: `pg_constraint` (`confdeltype`, `confmatchtype`, `confupdtype`, `contype`), `pg_operator` (`oprkind`), `pg_proc` (`proargmodes`), `pg_rewrite` (`ev_enabled`, `ev_type`), and `pg_trigger` (`tgenabled`).
-
-

<h3 id="v21-2-0-deprecations">Deprecations</h3>

      - -- The `kv.closed_timestamp.closed_fraction` and `kv.follower_read.target_multiple` [settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) are now deprecated and turned into no-ops. They had already stopped controlling the closing of timestamps in v21.1, but were still influencing the [`follower_read_timestamp()`](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) computation for a timestamp that is likely to be closed on all followers. To replace them, a simpler `kv.closed_timestamp.propagation_slack` setting is introduced, modeling the delay between when a leaseholder closes a timestamp and when all the followers become aware of it (defaults conservatively to 1s). `follower_read_timestamp()` is now computed as `kv.closed_timestamp.target_duration` + `kv.closed_timestamp.side_transport_interval` + `kv.closed_timestamp.propagation_slack`, which defaults to 4.2s (instead of the previous default of 4.8s). -- Because the [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) database privilege is being deprecated, CockroachDB now additionally checks for the [`CONNECT`](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) privilege on the database to allow for backing up the database. Existing users with `SELECT` on the database can still back up the database, but it is now recommended to [`GRANT`](https://www.cockroachlabs.com/docs/v21.2/grant) `CONNECT` on the database. -- [`IMPORT TABLE`](https://www.cockroachlabs.com/docs/v21.2/import) will be deprecated in v21.2 and removed in a future release. Users should create a table using [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.2/create-table) and then [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) the newly created table. -- Granting `SELECT`, `UPDATE`, `INSERT`, and `DELETE` on databases is being deprecated. The syntax is still supported, but is automatically converted to the equivalent `ALTER DEFAULT PRIVILEGES FOR ALL ROLES` command. The user is given a notice that the privilege is incompatible and automatically being converted to an `ALTER DEFAULT PRIVILEGE FOR ALL ROLES` command. - -

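A sketch of the new `follower_read_timestamp()` computation described above; the 3s and 200ms figures are the assumed defaults of the first two settings (only the 1s propagation slack default is stated above):

~~~ sql
-- follower_read_timestamp() is now roughly now() minus the sum of:
--   kv.closed_timestamp.target_duration         (assumed default: 3s)
--   kv.closed_timestamp.side_transport_interval (assumed default: 200ms)
--   kv.closed_timestamp.propagation_slack       (default: 1s)
-- 3s + 0.2s + 1s = 4.2s, matching the new default noted above.
SELECT follower_read_timestamp();
~~~
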
<h3 id="v21-2-0-known-limitations">Known limitations</h3>

      - -For information about new and unresolved limitations in CockroachDB v21.2, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v21.2/known-limitations). - -

<h3 id="v21-2-0-education">Education</h3>

      - -Area | Topic | Description ----------------------+---------------------------+------------ -Cockroach University | **New Serverless course** | [Introduction to Serverless Databases and CockroachDB {{ site.data.products.serverless }}](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-serverless+self-paced/about) teaches you the core concepts behind serverless databases and gives you the tools you need to get started with CockroachDB {{ site.data.products.serverless }}. -Cockroach University | **New Schema Design Course** | [Foundations of Schema Design in CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+foundations-schema-design-cockroachdb+self-paced/about) teaches you CockroachDB's rich data types and the best practices and anti-patterns to consider when designing schema for CockroachDB. -Cockroach University | **New Node.js Course** | [Fundamentals of CockroachDB for Node.js Developers](https://university.cockroachlabs.com/courses/course-v1:crl+fundamentals-of-crdb-for-nodejs-devs+self-paced/about) guides you through building a full-stack vehicle-sharing app in Typescript using Node.js with TypeORM and a
      CockroachCloud Free cluster as the backend. -Docs | **CockroachDB Cloud Guidance** | Added Node.js, Go, Python, and Java sample app code and connection guidance to the [CockroachDB {{ site.data.products.serverless }} Quickstart](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart), as well as docs explaining the [CockroachDB {{ site.data.products.serverless }} Architecture](https://www.cockroachlabs.com/docs/cockroachcloud/architecture), important concepts for [planning/managing a Serverless cluster](https://www.cockroachlabs.com/docs/cockroachcloud/plan-your-cluster) (e.g., request units, cluster scaling), and how to run [customer-owned backups on CockroachDB {{ site.data.products.dedicated }} and CockroachDB {{ site.data.products.serverless }} clusters](https://www.cockroachlabs.com/docs/cockroachcloud/take-and-restore-self-managed-backups). -Docs | **Multi-Region Guidance** | Added docs on [transitioning to the new multi-region SQL abstractions](https://www.cockroachlabs.com/docs/v21.2/migrate-to-multiregion-sql) from the legacy zone-configuration-based workflows, and on [data domiciling in multi-region clusters](https://www.cockroachlabs.com/docs/v21.2/data-domiciling). -Docs | **Performance Tuning Recipes** | Added [solutions for common performance issues](https://www.cockroachlabs.com/docs/v21.2/performance-recipes). -Docs | **New Developer Tutorials** | Added tutorials on [using Google Cloud Run](https://www.cockroachlabs.com/docs/v21.2/deploy-app-gcr) to deploy a containerized Django application and [using the Alembic schema migration module](https://www.cockroachlabs.com/docs/v21.2/alembic) with a simple Python application. -Docs | **Changefeed Tuning Guidance** | Added guidance on [tuning changefeeds](https://www.cockroachlabs.com/docs/v21.2/advanced-changefeed-configuration) for high-durability delivery, high throughput, and Kafka sinks. -Docs | **Sample App Specifications** | Added a [`README`](https://github.com/cockroachdb/docs/blob/master/SAMPLE_APP_SPEC.md) with specifications for future sample apps built by external partners or contributors. -Docs | **Disk Stall Troubleshooting** | Added docs explaining the [symptoms, causes, and mitigations for disk stalls](https://www.cockroachlabs.com/docs/v21.2/cluster-setup-troubleshooting#disk-stalls). -Docs | **Network Logging with Fluentd** | Added an example configuration for [network logging with Fluentd](https://www.cockroachlabs.com/docs/v21.2/logging-use-cases#network-logging). diff --git a/src/current/_includes/releases/v21.2/v21.2.1.md b/src/current/_includes/releases/v21.2/v21.2.1.md deleted file mode 100644 index 37ec09e8093..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.1.md +++ /dev/null @@ -1,25 +0,0 @@ -## v21.2.1 - -Release Date: November 29, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-1-bug-fixes">Bug fixes</h3>

      - -- The timeout check of Raft application during upgrade migrations has been increased from 5 seconds to 1 minute and is now controllable via the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `kv.migration.migrate_application.timeout`. This makes migrations less likely to fail in clusters with ongoing rebalancing activity during upgrade migrations. [#73061][#73061] -- Fixed a bug where [`BACKUP ... with revision_history`](https://www.cockroachlabs.com/docs/v21.2/take-backups-with-revision-history-and-restore-from-a-point-in-time) would fail on an upgraded, but un-finalized cluster. It will now succeed. [#73050][#73050] -- Fixed a bug that could cause some semi [lookup joins](https://www.cockroachlabs.com/docs/v21.2/joins#lookup-joins) on [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-by-row-tables) tables to return early before finding all data. This bug is currently only present in [v21.2.0](v21.2.html#v21-2-0). This problem only manifested if there was an `ON` condition on top of the equality condition used for the lookup join, and the lookup columns did not form a key in the index being looked up. The primary impact of this issue for most users relates to uniqueness checks on mutations of `REGIONAL BY ROW` tables, since uniqueness checks are implemented with a semi lookup join with an `ON` condition. The result of this bug was that uniqueness checks were not comprehensive, and could miss an existing duplicate key on a remote node. This could cause data to be erroneously inserted with a duplicate key when it should have failed the uniqueness check. [#73063][#73063] -- [Backups](https://www.cockroachlabs.com/docs/v21.2/take-full-and-incremental-backups) taken while a cluster contains a mix of v21.2 and v21.1 nodes may fail. [Upgrading](https://www.cockroachlabs.com/docs/v21.2/upgrade-cockroach-version) the entire cluster to v21.2 should resolve the issues. The [technical advisory 72839](https://www.cockroachlabs.com/docs/advisories/a72839) provides more information about possible [remediations](https://www.cockroachlabs.com/docs/advisories/a72839#mitigation). The error returned after a backup failure in this case now also directs the user to the technical advisory. [#72880][#72880] -- Fixed a bug that caused a [full-cluster backup](https://www.cockroachlabs.com/docs/v21.2/backup#backup-a-cluster) to fail while upgrading from v21.1 to v21.2. This caused an error, because the `system.tenant_usage` table, which is present in v21.2, is not present in v21.1. [#72840][#72840] -- Fixed a bug where cluster backups were backing up opt-out system tables unexpectedly. [#71368][#71368] - -

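A sketch of adjusting the new timeout, assuming the setting accepts a duration value (the value is illustrative):

~~~ sql
-- The default was raised from 5 seconds to 1 minute; clusters with heavy
-- rebalancing during upgrades can raise it further.
SET CLUSTER SETTING kv.migration.migrate_application.timeout = '2m';
~~~
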
<h3 id="v21-2-1-contributors">Contributors</h3>

      - -This release includes 6 merged PRs by 6 authors. - -[#73061]: https://github.com/cockroachdb/cockroach/pull/73061 -[#73050]: https://github.com/cockroachdb/cockroach/pull/73050 -[#73063]: https://github.com/cockroachdb/cockroach/pull/73063 -[#71368]: https://github.com/cockroachdb/cockroach/pull/71368 -[#72840]: https://github.com/cockroachdb/cockroach/pull/72840 -[#72880]: https://github.com/cockroachdb/cockroach/pull/72880 diff --git a/src/current/_includes/releases/v21.2/v21.2.10.md b/src/current/_includes/releases/v21.2/v21.2.10.md deleted file mode 100644 index ea23f8a81aa..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.10.md +++ /dev/null @@ -1,75 +0,0 @@ -## v21.2.10 - -Release Date: May 2, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-10-security-updates">Security updates</h3>

-
-- The `crdb_internal.reset_sql_stats()` and `crdb_internal.reset_index_usage_stats()` [built-in functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#system-info-functions) now check whether the user has the [admin role](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#admin-role). [#80277][#80277]
-
-

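For reference, the affected built-ins are invoked as ordinary functions; under the change above, both calls now fail for users without the admin role:

~~~ sql
SELECT crdb_internal.reset_sql_stats();
SELECT crdb_internal.reset_index_usage_stats();
~~~
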
<h3 id="v21-2-10-enterprise-edition-changes">Enterprise edition changes</h3>

      - -- Added a `changefeed.backfill.scan_request_size` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) to control scan request size during [backfill](https://www.cockroachlabs.com/docs/v21.2/use-changefeeds#schema-changes-with-column-backfill). [#79709][#79709] - -

<h3 id="v21-2-10-sql-language-changes">SQL language changes</h3>

      - -- A `pgerror` with code `22P02` is now returned when an invalid cast to `OID` is made. [#79849][#79849] -- An incremental [backup](https://www.cockroachlabs.com/docs/v21.2/backup) now fails if the `AS OF SYSTEM TIME` is less than the previous backup's end time. [#80287][#80287] - -

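A sketch of the new incremental-backup check (the collection URI is hypothetical): an incremental backup whose `AS OF SYSTEM TIME` resolves to a time before the previous backup's end time now fails instead of leaving a gap in the chain:

~~~ sql
-- Fails if '-1h' is earlier than the end time of the most recent
-- backup in the collection.
BACKUP INTO LATEST IN 's3://backups/db?AUTH=implicit'
    AS OF SYSTEM TIME '-1h';
~~~
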
<h3 id="v21-2-10-db-console-changes">DB Console changes</h3>

-
-- Added column-based filtering to the [**Hot Ranges**](https://www.cockroachlabs.com/docs/v21.2/ui-hot-ranges-page) page. [#79645][#79645]
-- Added a dropdown filter on the Node Diagnostics page to view active, decommissioned, or all nodes. [#80336][#80336]
-
-

<h3 id="v21-2-10-bug-fixes">Bug fixes</h3>

-
-- The execution time as reported on `DISTSQL` diagrams within the statement bundle collected via [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze#explain-analyze-debug) is no longer negative when the statement encountered an error. [#79369][#79369]
-- An internal error when the inner expression of a column access expression evaluated to `NULL` no longer occurs. For example, evaluation of the expression `(CASE WHEN b THEN ((ROW(1) AS a)) ELSE NULL END).a` would error when `b` is `false`. [#79528][#79528]
-- An error when accessing a named column of a labelled tuple no longer occurs. The error occurred when an expression could produce one of several different tuples. For example, `(CASE WHEN true THEN (ROW(1) AS a) ELSE (ROW(2) AS a) END).a` would fail to evaluate. [#79528][#79528]
-- [Pebble](https://www.cockroachlabs.com/docs/v21.2/architecture/storage-layer#pebble) compaction heuristics no longer allow a large compaction backlog to accumulate, eventually triggering high read amplification. [#79611][#79611]
-- HTTP 304 responses no longer result in error logs. [#79860][#79860]
-- The custom time series metric `sql.distsql.queries.spilled` is no longer computed incorrectly, which previously led to an exaggerated number. [#79881][#79881]
-- [`nextval` and `setval`](https://www.cockroachlabs.com/docs/v21.2/create-sequence#sequence-functions) are non-transactional except when they are called in the same transaction that the sequence was created in. Creating a sequence and calling `nextval` or `setval` on it within a transaction no longer causes the query containing `nextval` to hang. [#79866][#79866]
-- The [SQL Activity](https://www.cockroachlabs.com/docs/v21.2/ui-overview#sql-activity) page no longer returns a "descriptor not found" error in a v21.1-v21.2 mixed-version state. [#79795][#79795]
-- [Resetting SQL statistics](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-statistics) in a v21.1-v21.2 mixed-version state no longer causes a "descriptor not found" error. [#79795][#79795]
-- In a v21.1-v21.2 mixed-version state, CockroachDB no longer attempts to flush statistics to disk, and no longer logs "descriptor not found" error messages. [#79795][#79795]
-- Added a detailed error message for index out of bounds when decoding a [binary tuple](https://www.cockroachlabs.com/docs/v21.2/scalar-expressions#tuple-constructors) datum. [#79963][#79963]
-- CockroachDB no longer encounters an internal error when evaluating queries with [`OFFSET`](https://www.cockroachlabs.com/docs/v21.2/limit-offset) and [`LIMIT`](https://www.cockroachlabs.com/docs/v21.2/limit-offset) clauses when the addition of the `offset` and the `limit` value would be larger than the `int64` range. [#79924][#79924]
-- Automatic [encryption-at-rest data key rotation](https://www.cockroachlabs.com/docs/v21.2/security-reference/encryption#encryption-at-rest-enterprise) is no longer disabled after a node restart without a store key rotation. [#80170][#80170]
-- When using the `ST_Intersects`, `ST_Within`, or `ST_Covers` [spatial functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#spatial-functions), `NaN` coordinates no longer return `true` for point-in-polygon operations. [#80201][#80201]
-- [`ST_MinimumBoundingCircle`](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#spatial-functions) no longer panics with infinite coordinates and a `num_segs` argument. [#80346][#80346]
-- The formatting/printing behavior for [`ALTER DEFAULT PRIVILEGES`](https://www.cockroachlabs.com/docs/v21.2/alter-default-privileges) was fixed, which corrects some mistaken error messages. [#80326][#80326]
-- Bulk data sent to the [KV storage layer](https://www.cockroachlabs.com/docs/v21.2/architecture/storage-layer) is now sent at reduced [admission control](https://www.cockroachlabs.com/docs/v21.2/architecture/admission-control) priority. [#80387][#80387]
-
-

<h3 id="v21-2-10-performance-improvements">Performance improvements</h3>

      - -- Rollback of [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v21.2/create-table-as) with large quantities of data now has similar performance to [`DROP TABLE`](https://www.cockroachlabs.com/docs/v21.2/drop-table). [#79603][#79603] - -

<h3 id="v21-2-10-contributors">Contributors</h3>

      - -This release includes 38 merged PRs by 26 authors. - -[#78639]: https://github.com/cockroachdb/cockroach/pull/78639 -[#79369]: https://github.com/cockroachdb/cockroach/pull/79369 -[#79528]: https://github.com/cockroachdb/cockroach/pull/79528 -[#79603]: https://github.com/cockroachdb/cockroach/pull/79603 -[#79611]: https://github.com/cockroachdb/cockroach/pull/79611 -[#79645]: https://github.com/cockroachdb/cockroach/pull/79645 -[#79709]: https://github.com/cockroachdb/cockroach/pull/79709 -[#79718]: https://github.com/cockroachdb/cockroach/pull/79718 -[#79795]: https://github.com/cockroachdb/cockroach/pull/79795 -[#79849]: https://github.com/cockroachdb/cockroach/pull/79849 -[#79860]: https://github.com/cockroachdb/cockroach/pull/79860 -[#79866]: https://github.com/cockroachdb/cockroach/pull/79866 -[#79881]: https://github.com/cockroachdb/cockroach/pull/79881 -[#79924]: https://github.com/cockroachdb/cockroach/pull/79924 -[#79963]: https://github.com/cockroachdb/cockroach/pull/79963 -[#80170]: https://github.com/cockroachdb/cockroach/pull/80170 -[#80201]: https://github.com/cockroachdb/cockroach/pull/80201 -[#80277]: https://github.com/cockroachdb/cockroach/pull/80277 -[#80287]: https://github.com/cockroachdb/cockroach/pull/80287 -[#80326]: https://github.com/cockroachdb/cockroach/pull/80326 -[#80336]: https://github.com/cockroachdb/cockroach/pull/80336 -[#80346]: https://github.com/cockroachdb/cockroach/pull/80346 -[#80387]: https://github.com/cockroachdb/cockroach/pull/80387 diff --git a/src/current/_includes/releases/v21.2/v21.2.11.md b/src/current/_includes/releases/v21.2/v21.2.11.md deleted file mode 100644 index 24b0406fa7f..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.11.md +++ /dev/null @@ -1,47 +0,0 @@ -## v21.2.11 - -Release Date: May 23, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

<h3 id="v21-2-11-enterprise-edition-changes">Enterprise edition changes</h3>

      - -- Fixed a bug where [backups](https://www.cockroachlabs.com/docs/v21.2/take-full-and-incremental-backups) in the base directory of a Google Storage bucket would not be discovered by `SHOW BACKUPS`. These backups will now appear correctly. [#80510][#80510] - -

<h3 id="v21-2-11-sql-language-changes">SQL language changes</h3>

      - -- When using Azure Cloud Storage for data operations, CockroachDB now calculates the Storage Account URL from the provided `AZURE_ENVIRONMENT` query parameter. This defaults to `AzurePublicCloud` if not specified to maintain backwards compatibility. This parameter should not be used when the cluster is in a mixed version or upgrading state, as nodes that have not been upgraded will continue to send requests to the `AzurePublicCloud` even in the presence of this parameter. [#80800][#80800] -- Added a new [session variable](https://www.cockroachlabs.com/docs/v21.2/set-vars#supported-variables), `enable_multiple_modifications_of_table`, which can be used instead of the cluster variable `sql.multiple_modifications_of_table.enabled` to allow statements containing multiple [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v21.2/insert#on-conflict-clause), [`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update), or [`DELETE`](https://www.cockroachlabs.com/docs/v21.2/delete) subqueries to modify the same table. The underlying issue [#70731](https://github.com/cockroachdb/cockroach/issues/70731) is not fixed: table corruption remains possible if the same row is modified multiple times by different subqueries of a single statement, regardless of the value of the new `enable_multiple_modifications_of_table` session variable or the existing `sql.multiple_modifications_of_table.enabled` cluster variable. Cockroach Labs recommends rewriting these statements, but the session variable is provided as an aid if this is not possible. [#81016][#81016] - -

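A sketch of the session-level opt-in described above; note that the corruption caveat still applies, so the two subqueries here deliberately touch disjoint rows (the table is hypothetical):

~~~ sql
SET enable_multiple_modifications_of_table = true;
-- One statement with two mutation subqueries against the same table.
WITH moved AS (
    DELETE FROM t WHERE k = 1 RETURNING k, v
)
INSERT INTO t (k, v) SELECT k + 100, v FROM moved;
~~~
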
<h3 id="v21-2-11-db-console-changes">DB Console changes</h3>

      - -- Added an alert banner on the [Overview](https://www.cockroachlabs.com/docs/v21.2/ui-overview) list page to warn users when staggered node versions are detected in a cluster. [#80742][#80742] - -

<h3 id="v21-2-11-bug-fixes">Bug fixes</h3>

      - -- Fixed a rare crash which can occur when [restarting a node](https://www.cockroachlabs.com/docs/v21.2/cockroach-start) after [dropping tables](https://www.cockroachlabs.com/docs/v21.2/drop-table). [#80571][#80571] -- Fixed a bug where in very rare circumstances CockroachDB could incorrectly evaluate queries with [`ORDER BY`](https://www.cockroachlabs.com/docs/v21.2/order-by) clauses when the prefix of ordering was already provided by the index ordering of the scanned table. [#80731][#80731] -- The list of recently decommissioned nodes and the historical list of decommissioned nodes now correctly displays decommissioned nodes. [#80747][#80747] -- Fixed a goroutine leak when internal [rangefeed](https://www.cockroachlabs.com/docs/v21.2/use-changefeeds#enable-rangefeeds) clients received certain kinds of retryable errors. [#80797][#80797] -- Fixed a bug in which some prepared statements could result in incorrect results when executed. This could occur when the prepared statement included an equality comparison between an [index column](https://www.cockroachlabs.com/docs/v21.2/schema-design-indexes) and a placeholder, and the placeholder was cast to a [type](https://www.cockroachlabs.com/docs/v21.2/data-types) that was different from the column type. For example, if column a was of type [`DECIMAL`](https://www.cockroachlabs.com/docs/v21.2/decimal), the following prepared query could produce incorrect results when executed: `SELECT * FROM t_dec WHERE a = $1::INT8;` [#81364][#81364] - -
      - -

      Contributors

      - -This release includes 22 merged PRs by 20 authors. -We would like to thank the following contributors from the CockroachDB community: - -- Nathan Lowe (first-time contributor) - -
      - -[#80510]: https://github.com/cockroachdb/cockroach/pull/80510 -[#80571]: https://github.com/cockroachdb/cockroach/pull/80571 -[#80731]: https://github.com/cockroachdb/cockroach/pull/80731 -[#80742]: https://github.com/cockroachdb/cockroach/pull/80742 -[#80747]: https://github.com/cockroachdb/cockroach/pull/80747 -[#80797]: https://github.com/cockroachdb/cockroach/pull/80797 -[#80800]: https://github.com/cockroachdb/cockroach/pull/80800 -[#81016]: https://github.com/cockroachdb/cockroach/pull/81016 -[#81364]: https://github.com/cockroachdb/cockroach/pull/81364 diff --git a/src/current/_includes/releases/v21.2/v21.2.12.md b/src/current/_includes/releases/v21.2/v21.2.12.md deleted file mode 100644 index 893f5696d04..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.12.md +++ /dev/null @@ -1,16 +0,0 @@ -## v21.2.12 - -Release Date: June 6, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

      Miscellaneous

      - -* Fixed an internal logging bug that affected the stability of the server. [#4f97a101e][#4f97a101e] - -

      Contributors

      - -This release includes 2 commits by 2 authors. - -[#4d7c8f356]: https://github.com/cockroachdb/cockroach/commit/4d7c8f356 -[#4f97a101e]: https://github.com/cockroachdb/cockroach/commit/4f97a101e diff --git a/src/current/_includes/releases/v21.2/v21.2.13.md b/src/current/_includes/releases/v21.2/v21.2.13.md deleted file mode 100644 index 4b461cc7939..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.13.md +++ /dev/null @@ -1,77 +0,0 @@ -## v21.2.13 - -Release Date: July 5, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

      SQL language changes

- -- Fixed a small typo when using [DateStyle and IntervalStyle](https://www.cockroachlabs.com/docs/v21.2/set-vars#supported-variables). [#81547][#81547] -- Implemented the [`COPY FROM ... ESCAPE ...`](https://www.cockroachlabs.com/docs/v21.2/copy-from) syntax (see the example after this list). [#82636][#82636] -- Fixed an issue where [`SHOW BACKUP with privileges`](https://www.cockroachlabs.com/docs/v21.2/show-backup#show-a-backup-with-privileges) output grant statements with incorrect syntax (specifically, without the object type). For example, previously `SHOW BACKUP with privileges` output: `GRANT ALL ON status TO j4;` Now it correctly outputs: `GRANT ALL ON TYPE status TO j4;`. [#82831][#82831] -- The log fields for captured index usage statistics are no longer redacted. [#83294][#83294] - -
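A hedged sketch of the new `COPY FROM ... ESCAPE ...` syntax; the table name `t` and the choice of escape character are hypothetical, not from the release note:

~~~ sql
-- Use a backslash, instead of the CSV default of a doubled quote character,
-- to escape quotes inside quoted fields.
COPY t FROM STDIN WITH CSV ESCAPE e'\\';
~~~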

      Bug fixes

- -- Raft snapshots no longer risk starvation under very high concurrency. Before this fix, it was possible that many Raft snapshots could be starved and prevented from succeeding due to timeouts, which were accompanied by errors like [`error rate limiting bulk io write: context deadline exceeded`](https://www.cockroachlabs.com/docs/v21.2/common-errors#context-deadline-exceeded). [#81335][#81335] -- [`ST_MinimumBoundingCircle`](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#spatial-functions) no longer panics with `NaN` coordinates. [#81461][#81461] -- Fixed an issue where the `encryptionStatus` field on the [**Stores** debug page](https://www.cockroachlabs.com/docs/v21.2/ui-debug-pages) of the DB Console would display an error instead of displaying encryption details when [encryption-at-rest is enabled](https://www.cockroachlabs.com/docs/v21.2/security-reference/encryption#encryption-at-rest). [#81525][#81525] -- Fixed a panic that was caused by setting the `tracing` session variable using [`SET LOCAL`](https://www.cockroachlabs.com/docs/v21.2/set-vars) or [`ALTER ROLE ... SET`](https://www.cockroachlabs.com/docs/v21.2/alter-role). [#81507][#81507] -- Previously, cancelling `COPY` commands would show an `XXUUU` error instead of `57014`. This is now fixed. [#81604][#81604] -- Fixed a bug that caused errors with the message `unable to vectorize execution plan: unhandled expression type` in rare cases. This bug had been present since v21.2.0. [#81589][#81589] -- Fixed a bug where [changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) could fail permanently if they encountered an error while planning their distribution, even though such errors are usually transient. [#81691][#81691] -- Fixed a gap in disk-stall detection. Previously, disk stalls during filesystem metadata operations could go undetected, inducing deadlocks. Now stalls during these types of operations will correctly kill the process. [#81769][#81769] -- Fixed an issue where CockroachDB would encounter an internal error when executing queries with `lead` or `lag` window functions when the default argument had a different type than the first argument. [#81758][#81758] -- Fixed a bug where queries from a table with a `CHECK` constraint could error out if the query had `ORDER BY` and `LIMIT` clauses. [#81957][#81957] -- Fixed a nil pointer exception during the cleanup of a failed or cancelled [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) job. [#79032][#79032] -- The [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages no longer crash when a search term includes `*`. [#82084][#82084] -- The special characters `*` and `^` are no longer highlighted when searching on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages. [#82084][#82084] -- Previously, if materialized view creation failed during the backfill stage, CockroachDB would properly clean up the view but not any of the back references. Back and forward references for materialized views are now cleaned up. [#82100][#82100] -- Fixed a bug introduced in v21.2 where the `sql-stats-compaction` job had a chance of not being scheduled during an upgrade from v21.1 to v21.2, causing persisted statement and transaction statistics to be enabled without memory accounting. [#82282][#82282] -- The `--user` argument is no longer ignored when using [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql) in `--insecure` mode. [#82300][#82300] -- The [`SHOW STATISTICS`](https://www.cockroachlabs.com/docs/v21.2/show-statistics) output no longer displays statistics involving dropped columns. [#82318][#82318] -- Fixed the `identity_generation` column in the [`information_schema.columns`](https://www.cockroachlabs.com/docs/v21.2/information-schema#columns) table so its value is either `BY DEFAULT`, `ALWAYS`, or `NULL` (see the example after this list). [#82183][#82183] -- Disk write probes during node liveness heartbeats will no longer get stuck on stalled disks, instead returning an error once the operation times out. Additionally, disk probes now run in parallel on nodes with multiple stores. [#81514][#81514] -- Fixed a bug where an unresponsive node (e.g., a node with a stalled disk) could prevent other nodes from acquiring its leases, effectively stalling these ranges until the node was shut down or recovered. [#81816][#81816] -- Previously, when adding a column to a pre-existing table and adding a partial index referencing that column in the same transaction, DML operations against the table while the schema change was ongoing would fail. Now these hazardous schema changes are not allowed. [#82670][#82670] -- In v21.1, a bug was introduced whereby default values were recomputed when populating data in new secondary indexes for columns which were added in the same transaction as the index. This would arise, for example, in cases like `ALTER TABLE t ADD COLUMN f FLOAT8 UNIQUE DEFAULT (random())`. If the default expression was not volatile, then the recomputation was harmless. If, however, the default expression was volatile, the data in the secondary index would not match the data in the primary index: a corrupt index would have been created. This bug has now been fixed. [#83223][#83223] -- Previously, a user could be connected to a database but unable to see the metadata for that database in [`pg_catalog`](https://www.cockroachlabs.com/docs/v21.2/pg-catalog) if the user did not have privileges for the database. Now, users can always see the `pg_catalog` metadata for a database they are connected to. [#83359][#83359] -- Fixed a bug where it was possible to have a [virtual computed column](https://www.cockroachlabs.com/docs/v21.2/computed-columns) with an active `NOT NULL` constraint despite having rows in the table for which the column was `NULL`. [#83355][#83355] -- Fixed an issue with the [`soundex` function](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) where certain Unicode inputs could result in crashes, errors, or incorrect outputs. [#83434][#83434] -- Fixed a bug where it was possible to accrue [MVCC garbage](https://www.cockroachlabs.com/docs/v21.2/architecture/storage-layer#mvcc) for much longer than needed. [#82969][#82969] - -
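A quick way to inspect the corrected `identity_generation` values, using the standard `information_schema` catalog (the table name `t` is hypothetical):

~~~ sql
SELECT column_name, identity_generation
  FROM information_schema.columns
 WHERE table_name = 't';
-- identity_generation is now one of 'ALWAYS', 'BY DEFAULT', or NULL.
~~~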

      Contributors

      - -This release includes 56 merged PRs by 29 authors. - -[#79032]: https://github.com/cockroachdb/cockroach/pull/79032 -[#81335]: https://github.com/cockroachdb/cockroach/pull/81335 -[#81461]: https://github.com/cockroachdb/cockroach/pull/81461 -[#81507]: https://github.com/cockroachdb/cockroach/pull/81507 -[#81514]: https://github.com/cockroachdb/cockroach/pull/81514 -[#81525]: https://github.com/cockroachdb/cockroach/pull/81525 -[#81547]: https://github.com/cockroachdb/cockroach/pull/81547 -[#81589]: https://github.com/cockroachdb/cockroach/pull/81589 -[#81604]: https://github.com/cockroachdb/cockroach/pull/81604 -[#81691]: https://github.com/cockroachdb/cockroach/pull/81691 -[#81758]: https://github.com/cockroachdb/cockroach/pull/81758 -[#81769]: https://github.com/cockroachdb/cockroach/pull/81769 -[#81816]: https://github.com/cockroachdb/cockroach/pull/81816 -[#81957]: https://github.com/cockroachdb/cockroach/pull/81957 -[#82084]: https://github.com/cockroachdb/cockroach/pull/82084 -[#82100]: https://github.com/cockroachdb/cockroach/pull/82100 -[#82183]: https://github.com/cockroachdb/cockroach/pull/82183 -[#82282]: https://github.com/cockroachdb/cockroach/pull/82282 -[#82300]: https://github.com/cockroachdb/cockroach/pull/82300 -[#82318]: https://github.com/cockroachdb/cockroach/pull/82318 -[#82468]: https://github.com/cockroachdb/cockroach/pull/82468 -[#82636]: https://github.com/cockroachdb/cockroach/pull/82636 -[#82670]: https://github.com/cockroachdb/cockroach/pull/82670 -[#82831]: https://github.com/cockroachdb/cockroach/pull/82831 -[#82901]: https://github.com/cockroachdb/cockroach/pull/82901 -[#82969]: https://github.com/cockroachdb/cockroach/pull/82969 -[#83223]: https://github.com/cockroachdb/cockroach/pull/83223 -[#83294]: https://github.com/cockroachdb/cockroach/pull/83294 -[#83355]: https://github.com/cockroachdb/cockroach/pull/83355 -[#83359]: https://github.com/cockroachdb/cockroach/pull/83359 -[#83434]: https://github.com/cockroachdb/cockroach/pull/83434 \ No newline at end of file diff --git a/src/current/_includes/releases/v21.2/v21.2.14.md b/src/current/_includes/releases/v21.2/v21.2.14.md deleted file mode 100644 index e7e21af52f7..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.14.md +++ /dev/null @@ -1,87 +0,0 @@ -## v21.2.14 - -Release Date: August 1, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

      Security updates

      - -- Added access control checks to three [multi-region related built-in functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#multi-region-functions). [#83987][#83987] - -

      SQL language changes

- -- The error code reported when trying to use a system or virtual column in the [`STORING`](https://www.cockroachlabs.com/docs/v21.2/create-index#store-columns) clause of an index has been changed from `XXUUU (internal error)` to `0A000 (feature not supported)`. [#83649][#83649] -- The error code reported when attempting to use a system column in an index as a key column has been changed from `XXUUU (internal error)` to `0A000 (feature not supported)`. [#83649][#83649] -- [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v21.2/as-of-system-time) now takes the time zone into account when converting to UTC. For example, `2022-01-01 08:00:00-04:00` is treated the same as `2022-01-01 12:00:00` instead of `2022-01-01 08:00:00` (see the example after this list). [#84665][#84665] -- Added additional logging for `COPY` statements to the [`SQL_EXEC` channel](https://www.cockroachlabs.com/docs/v21.2/logging#sql_exec) if the `sql.trace.log_statement_execute` cluster setting is set. [#84679][#84679] -- An error message is now logged to the [`SQL_EXEC` channel](https://www.cockroachlabs.com/docs/v21.2/logging#sql_exec) when parsing fails. [#84679][#84679] - -
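A sketch of the corrected `AS OF SYSTEM TIME` time zone handling, reusing the timestamp from the note (the table name `t` is hypothetical):

~~~ sql
-- The -04:00 offset is now applied, so this reads as of 2022-01-01 12:00:00 UTC.
SELECT * FROM t AS OF SYSTEM TIME '2022-01-01 08:00:00-04:00';
~~~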

      Operational changes

- -- The application name that is part of a SQL session is no longer considered [`redactable`](https://www.cockroachlabs.com/docs/v21.2/configure-logs#redact-logs) information. [#83558][#83558] -- The [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-zip) and [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-merge-logs) commands will now work with JSON-formatted logs. [#83147][#83147] - -

      DB Console changes

- -- Updated the message shown when there is no data for the selected time interval on the [**Statements**](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [**Transactions**](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages. [#84624][#84624] - -

      Bug fixes

- -- Fixed a bug in transaction lifetime management where a lock could be held for a long period of time when adding a new column to a table (or altering a column type). This contention could make the [**Jobs**](https://www.cockroachlabs.com/docs/v21.2/ui-jobs-page) page non-responsive and job adoption slow. [#83475][#83475] -- Fixed a bug that prevented [partial indexes](https://www.cockroachlabs.com/docs/v21.2/partial-indexes) from being used in some query plans. For example, a partial index with a predicate `WHERE a IS NOT NULL` was not previously used if `a` was a `NOT NULL` column. [#83240][#83240] -- CockroachDB now treats `node unavailable` errors as retryable [changefeed](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) errors. [#82876][#82876] -- CockroachDB now ensures running [changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) do not inhibit node shutdown. [#82876][#82876] -- Index joins now consider functional dependencies from their input when determining equivalent columns, instead of returning an internal error. [#83550][#83550] -- Fixed a bug where using [`ADD COLUMN`](https://www.cockroachlabs.com/docs/v21.2/add-column) or [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v21.2/drop-column) with the legacy schema changer could fail on tables with large rows due to exceeding the Raft command max size. [#83817][#83817] -- Fixed a bug that may have caused a panic if a [Kafka](https://www.cockroachlabs.com/docs/v21.2/changefeed-sinks#kafka) server being written to by a [changefeed](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) failed at the wrong moment. [#83922][#83922] -- The [`cockroach debug merge-logs`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-merge-logs) command no longer returns an error when the log decoder attempts to parse older logs. [#83748][#83748] -- A flush message sent during portal execution in the `pgwire` extended protocol no longer results in an error. [#83954][#83954] -- Previously, [virtual computed columns](https://www.cockroachlabs.com/docs/v21.2/computed-columns) which were marked as `NOT NULL` could be added to new [secondary indexes](https://www.cockroachlabs.com/docs/v21.2/indexes). Now, attempts to add such columns to a secondary index will result in an error. Note that such invalid columns can still be added to tables. Work to resolve that bug is tracked in [#81675](https://github.com/cockroachdb/cockroach/issues/81675). [#83552][#83552] -- Fixed a rare issue where the failure to apply a [Pebble](https://www.cockroachlabs.com/docs/v21.2/architecture/storage-layer#pebble) manifest change (typically due to block device failure or unavailability) could result in an incorrect [LSM](https://www.cockroachlabs.com/docs/v21.2/architecture/storage-layer#log-structured-merge-trees) state. Such a state would likely result in a panic soon after the failed application. This change alters the behavior of Pebble to panic immediately in the case of a failure to apply a change to the manifest. [#83734][#83734] -- The connection OK log message and metric are now both emitted at the same point, after authentication completes, for consistency. This resolves an inconsistency (see linked issue) in the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview) where the log and metric did not match. [#84175][#84175] -- Previously, CockroachDB would not normalize `timestamp/timestamptz - timestamp/timestamptz` like PostgreSQL does in some cases (depending on the query). This is now fixed. [#84002][#84002] -- Fixed an internal error `node ... with MaxCost added to the memo` that could occur during planning when calculating the cardinality of an outer join when one of the inputs had 0 rows. [#84381][#84381] -- Fixed a bug in transaction conflict resolution which could allow backups to wait on long-running transactions. [#83905][#83905] -- Fixed a "fake" memory accounting leak that in rare cases could result in `memory budget exceeded` errors even if the actual RAM usage was within the `--max-sql-memory` limit. [#84325][#84325] -- Fixed the following [built-in functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) so that users can only run them if they have [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/select-clause) privileges on the relevant tables: `crdb_internal.revalidate_unique_constraints_in_all_tables`, `crdb_internal.revalidate_unique_constraints_in_table`, and `crdb_internal.revalidate_unique_constraint` (see the example after this list). [#84272][#84272] -- Fixed a bug that could cause existing [secondary indexes](https://www.cockroachlabs.com/docs/v21.2/indexes) to be unexpectedly dropped after running an [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.2/alter-primary-key) statement, if the new [primary key](https://www.cockroachlabs.com/docs/v21.2/primary-key) column set is a subset of the old primary key column set. [#84578][#84578] -- Fixed a bug introduced in v21.2.0 that could cause increased memory usage when scanning a table with wide rows. [#84921][#84921] - -
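A hedged sketch of invoking the privilege-checked built-ins named above; the exact argument format for the per-table variant is an assumption, and the table name `t` is hypothetical:

~~~ sql
-- Callers now need SELECT privileges on the underlying tables.
SELECT crdb_internal.revalidate_unique_constraints_in_table('t');
SELECT crdb_internal.revalidate_unique_constraints_in_all_tables();
~~~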

      Contributors

      - -This release includes 41 merged PRs by 26 authors. - -[#82876]: https://github.com/cockroachdb/cockroach/pull/82876 -[#83147]: https://github.com/cockroachdb/cockroach/pull/83147 -[#83240]: https://github.com/cockroachdb/cockroach/pull/83240 -[#83475]: https://github.com/cockroachdb/cockroach/pull/83475 -[#83550]: https://github.com/cockroachdb/cockroach/pull/83550 -[#83552]: https://github.com/cockroachdb/cockroach/pull/83552 -[#83558]: https://github.com/cockroachdb/cockroach/pull/83558 -[#83649]: https://github.com/cockroachdb/cockroach/pull/83649 -[#83734]: https://github.com/cockroachdb/cockroach/pull/83734 -[#83748]: https://github.com/cockroachdb/cockroach/pull/83748 -[#83817]: https://github.com/cockroachdb/cockroach/pull/83817 -[#83877]: https://github.com/cockroachdb/cockroach/pull/83877 -[#83905]: https://github.com/cockroachdb/cockroach/pull/83905 -[#83922]: https://github.com/cockroachdb/cockroach/pull/83922 -[#83954]: https://github.com/cockroachdb/cockroach/pull/83954 -[#83987]: https://github.com/cockroachdb/cockroach/pull/83987 -[#84002]: https://github.com/cockroachdb/cockroach/pull/84002 -[#84076]: https://github.com/cockroachdb/cockroach/pull/84076 -[#84096]: https://github.com/cockroachdb/cockroach/pull/84096 -[#84112]: https://github.com/cockroachdb/cockroach/pull/84112 -[#84175]: https://github.com/cockroachdb/cockroach/pull/84175 -[#84272]: https://github.com/cockroachdb/cockroach/pull/84272 -[#84325]: https://github.com/cockroachdb/cockroach/pull/84325 -[#84381]: https://github.com/cockroachdb/cockroach/pull/84381 -[#84578]: https://github.com/cockroachdb/cockroach/pull/84578 -[#84624]: https://github.com/cockroachdb/cockroach/pull/84624 -[#84665]: https://github.com/cockroachdb/cockroach/pull/84665 -[#84679]: https://github.com/cockroachdb/cockroach/pull/84679 -[#84848]: https://github.com/cockroachdb/cockroach/pull/84848 -[#84861]: https://github.com/cockroachdb/cockroach/pull/84861 -[#84864]: https://github.com/cockroachdb/cockroach/pull/84864 -[#84921]: https://github.com/cockroachdb/cockroach/pull/84921 -[2ea582b5c]: https://github.com/cockroachdb/cockroach/commit/2ea582b5c -[6fec4f744]: https://github.com/cockroachdb/cockroach/commit/6fec4f744 diff --git a/src/current/_includes/releases/v21.2/v21.2.15.md b/src/current/_includes/releases/v21.2/v21.2.15.md deleted file mode 100644 index 07f00ae5834..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.15.md +++ /dev/null @@ -1,41 +0,0 @@ -## v21.2.15 - -Release Date: August 29, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

      Command-line changes

      - -- The CLI now contains a flag (`--log-config-vars`) that allows for environment variables to be specified for expansion within the logging configuration file. This allows a single logging configuration file to service an array of sinks without further manipulation of the configuration file. [#85172][#85172] -- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-zip) now includes all system tables by default, except for a few (a deny list). [#86142][#86142] - -

      Bug fixes

      - -- Fixed incorrect error handling that could cause casts to OID types to fail in some cases. [#85125][#85125] -- Fixed a bug introduced in v20.2 that could cause a panic when an expression contained a geospatial comparison like `~` that was negated. [#84629][#84629] -- Nodes no longer gossip information about table statistics once all nodes in the cluster are upgraded to v21.2. [#85494][#85494] -- Fixed an error that could occur when a query included a limited reverse scan and some rows needed to be retrieved by `GET` requests. [#85583][#85583] -- Fixed a bug where clients could sometimes receive errors due to lease acquisition timeouts of the form `operation "storage.pendingLeaseRequest: requesting lease" timed out after 6s`. [#85429][#85429] -- Fixed a bug that could cause [union](https://www.cockroachlabs.com/docs/v21.2/selection-queries#union-combine-two-queries) queries to return incorrect results in rare cases. [#85653][#85653] -- Fixed a bug that could cause a panic in rare cases when the unnest() [function](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) was used with a `tuple` return type. [#85347][#85347] -- Fixed an issue where the `NO_INDEX_JOIN` hint could be ignored by the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) in some cases, causing it to create a query plan with an index join. [#86089][#86089] -- Previously, an empty column in the input to `COPY ... FROM CSV` would be treated as an empty string. Now, this is treated as `NULL`. The quoted empty string can still be used to input an empty string. Similarly, if a different `NULL` token is specified in the command options, it can be quoted in order to be treated as the equivalent string value. [#86148][#86148] - -
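To make the new `COPY ... FROM CSV` NULL semantics concrete, a minimal sketch (the table name `t` is hypothetical):

~~~ sql
-- An unquoted empty field in the CSV input now becomes NULL,
-- while a quoted empty string ("") still yields the empty string.
COPY t FROM STDIN WITH CSV;
~~~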

      Contributors

      - -This release includes 26 merged PRs by 17 authors. - -[#84629]: https://github.com/cockroachdb/cockroach/pull/84629 -[#85022]: https://github.com/cockroachdb/cockroach/pull/85022 -[#85125]: https://github.com/cockroachdb/cockroach/pull/85125 -[#85172]: https://github.com/cockroachdb/cockroach/pull/85172 -[#85342]: https://github.com/cockroachdb/cockroach/pull/85342 -[#85347]: https://github.com/cockroachdb/cockroach/pull/85347 -[#85429]: https://github.com/cockroachdb/cockroach/pull/85429 -[#85463]: https://github.com/cockroachdb/cockroach/pull/85463 -[#85494]: https://github.com/cockroachdb/cockroach/pull/85494 -[#85583]: https://github.com/cockroachdb/cockroach/pull/85583 -[#85653]: https://github.com/cockroachdb/cockroach/pull/85653 -[#86089]: https://github.com/cockroachdb/cockroach/pull/86089 -[#86142]: https://github.com/cockroachdb/cockroach/pull/86142 -[#86148]: https://github.com/cockroachdb/cockroach/pull/86148 diff --git a/src/current/_includes/releases/v21.2/v21.2.16.md b/src/current/_includes/releases/v21.2/v21.2.16.md deleted file mode 100644 index 08ecae913d0..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.16.md +++ /dev/null @@ -1,60 +0,0 @@ -## v21.2.16 - -Release Date: September 29, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

      SQL language changes

- -- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze) output now contains a warning when the estimated row count for scans is inaccurate. It includes a hint to collect the table [statistics](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer#table-statistics) manually. [#86873][#86873] -- Added a [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), `sql.metrics.statement_details.gateway_node.enabled`, that controls whether the [gateway node](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page#session-details-gateway-node) ID is persisted to the `system.statement_statistics` table as-is or as `0`, to decrease cardinality on the table. The node ID remains available in the statistics column (see the example after this list). [#88636][#88636] - -
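A sketch of toggling the new setting to reduce cardinality; when disabled, gateway node IDs are persisted as `0`:

~~~ sql
SET CLUSTER SETTING sql.metrics.statement_details.gateway_node.enabled = false;
~~~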

      Command-line changes

      - -- The `\c` metacommand in the [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql) shell no longer shows the password in plaintext. [#87550][#87550] - -

      Bug fixes

- -- Fixed a crash that could happen when formatting queries that have placeholder `BitArray` arguments. [#86886][#86886] -- Previously, queries with many joins and projections of multi-column expressions, e.g., `col1 + col2`, either present in the query or within a virtual column definition, could experience very long optimization times or hangs, where the query was never sent for execution. This has now been fixed. [#85872][#85872] -- Fixed a bug where `COPY ... FROM` into a `TEXT[]` column could misread some special characters. [#86888][#86888] -- Fixed a panic that could occur if placeholder arguments were used with the `with_min_timestamp` or `with_max_staleness` [functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) (see the example after this list). [#86882][#86882] -- Previously, escaping a double quote (`"`) with [`COPY`](https://www.cockroachlabs.com/docs/v21.2/copy-from) in `CSV` mode could cause all subsequent lines in the same `COPY` to be ignored if an `ESCAPE` clause was specified. This is now resolved. [#86976][#86976] -- Previously, CockroachDB would return an internal error when evaluating the `json_build_object` [built-in](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) when an [enum](https://www.cockroachlabs.com/docs/v21.2/enum) datum was passed as the first argument. This is now fixed. [#86850][#86850] -- Fixed a vulnerability in the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) that could cause a panic in rare cases when planning complex queries with [`ORDER BY`](https://www.cockroachlabs.com/docs/v21.2/order-by). [#86807][#86807] -- An active [replication report](https://www.cockroachlabs.com/docs/v21.2/query-replication-reports) update could prevent a node from shutting down until it completed. The report update is now cancelled on node shutdown instead. [#87923][#87923] -- Fixed an issue where imports and rebalances were being slowed down by the accumulation of empty directories from range snapshot applications. [#88142][#88142] -- Fixed a bug where CockroachDB could incorrectly fail to fetch rows with `NULL` values when reading from a unique [secondary index](https://www.cockroachlabs.com/docs/v21.2/indexes) when multiple [column families](https://www.cockroachlabs.com/docs/v21.2/column-families) were defined for the table and the index didn't store some of the `NOT NULL` columns. [#88207][#88207] -- Fixed a bug where, if [telemetry](https://www.cockroachlabs.com/docs/v21.2/logging#telemetry) was enabled, [`COPY`](https://www.cockroachlabs.com/docs/v21.2/copy-from) could sometimes cause the server to crash. [#88326][#88326] -- Fixed a bug that could cause nodes to crash in rare cases when executing [apply joins](https://www.cockroachlabs.com/docs/v21.2/joins#apply-joins) in query plans. [#88518][#88518] -- Fixed a bug that caused errors in rare cases when executing queries with [correlated `WITH` expressions](https://www.cockroachlabs.com/docs/v21.2/common-table-expressions#correlated-common-table-expressions). This bug was present since correlated `WITH` expressions were introduced in [v21.2.0]({% link releases/v21.2.md %}?#v21-2-0). [#88518][#88518] - -
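A hedged sketch of the bounded-staleness functions mentioned above, including the placeholder form that previously could panic; the table name `t`, the `id` column, and the exact placeholder usage are assumptions for illustration:

~~~ sql
-- Direct usage:
SELECT * FROM t AS OF SYSTEM TIME with_max_staleness('10s') WHERE id = 1;

-- Placeholder usage, which previously could panic:
PREPARE q AS SELECT * FROM t AS OF SYSTEM TIME with_max_staleness($1) WHERE id = 1;
EXECUTE q ('10s');
~~~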

      Performance improvements

- -- Long-running SQL sessions are now less likely to maintain large allocations for long periods of time, which decreases the risk of OOM and improves memory utilization. [#86798][#86798] -- Made sending and receiving [Raft](https://www.cockroachlabs.com/docs/v21.2/architecture/replication-layer#raft) queue sizes match. Previously, the receiver could unnecessarily drop messages when the sending queue was bigger than the receiving one. [#88447][#88447] - -

      Contributors

      - -This release includes 37 merged PRs by 20 authors. - -[#85872]: https://github.com/cockroachdb/cockroach/pull/85872 -[#86798]: https://github.com/cockroachdb/cockroach/pull/86798 -[#86807]: https://github.com/cockroachdb/cockroach/pull/86807 -[#86850]: https://github.com/cockroachdb/cockroach/pull/86850 -[#86873]: https://github.com/cockroachdb/cockroach/pull/86873 -[#86882]: https://github.com/cockroachdb/cockroach/pull/86882 -[#86886]: https://github.com/cockroachdb/cockroach/pull/86886 -[#86888]: https://github.com/cockroachdb/cockroach/pull/86888 -[#86976]: https://github.com/cockroachdb/cockroach/pull/86976 -[#87058]: https://github.com/cockroachdb/cockroach/pull/87058 -[#87125]: https://github.com/cockroachdb/cockroach/pull/87125 -[#87550]: https://github.com/cockroachdb/cockroach/pull/87550 -[#87707]: https://github.com/cockroachdb/cockroach/pull/87707 -[#87923]: https://github.com/cockroachdb/cockroach/pull/87923 -[#88142]: https://github.com/cockroachdb/cockroach/pull/88142 -[#88207]: https://github.com/cockroachdb/cockroach/pull/88207 -[#88326]: https://github.com/cockroachdb/cockroach/pull/88326 -[#88447]: https://github.com/cockroachdb/cockroach/pull/88447 -[#88518]: https://github.com/cockroachdb/cockroach/pull/88518 -[#88636]: https://github.com/cockroachdb/cockroach/pull/88636 diff --git a/src/current/_includes/releases/v21.2/v21.2.17.md b/src/current/_includes/releases/v21.2/v21.2.17.md deleted file mode 100644 index 09e8ee5d284..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.17.md +++ /dev/null @@ -1,36 +0,0 @@ -## v21.2.17 - -Release Date: October 17, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

      Bug fixes

- -- Fixed a rare internal error, `estimated row count must be non-zero`, that could occur during planning when a query predicate included values close to the maximum or minimum `int64` value. [#88955][#88955] -- Fixed a longstanding bug that could cause the optimizer to produce an incorrect plan when the aggregate functions `st_makeline` or `st_extent` were called with invalid-type and empty inputs, respectively. [#88954][#88954] -- Fixed a bug that caused high SQL tail latencies during background rebalancing in the cluster. [#88738][#88738] -- Fixed a bug where draining or drained nodes could re-acquire leases during an import or an index backfill. [#88725][#88725] -- Fixed a bug that caused incorrect evaluation of expressions in the form `col +/- const1 ? const2`, where `const1` and `const2` are constant values and `?` is any comparison operator. The bug was caused by operator overflow when the optimizer attempted to simplify these expressions to have a single constant value. [#88971][#88971] -- Fixed a bug, present since v2.1.0, where queries containing a subquery with `EXCEPT` could produce incorrect results. This could happen if the optimizer could guarantee that the left side of the `EXCEPT` always returned more rows than the right side. In this case, the optimizer made a faulty assumption that the `EXCEPT` subquery always returned at least one row, which could cause the optimizer to perform an invalid transformation, possibly causing the full query to return incorrect results. [#89132][#89132] -- CockroachDB will now flush the write-ahead log on consistency checker failures when writing storage checkpoints. [#89401][#89401] -- Fixed a bug that could cause incorrect results from the floor division operator, `//`, when the numerator is non-constant and the denominator is the constant `1` (see the example after this list). [#89264][#89264] -- Fixed a source of internal connectivity problems that would resolve after restarting the affected node. [#89618][#89618] -- Fixed errors that could occur in automatic statistics collection when the cluster setting `sql.stats.automatic_collection.min_stale_rows` was set to `0`. [#89706][#89706] -- Fixed a bug that caused spurious failures when running a restore. [#89019][#89019] - -
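To make the floor-division fix concrete, a small sketch (the table and column names are hypothetical):

~~~ sql
-- The floor division operator; with a non-constant numerator and a constant
-- denominator of 1, this previously could return incorrect results.
-- It now correctly returns the floor of i / 1, i.e., i itself for integer i.
SELECT i // 1 FROM t;
~~~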

      Contributors

      - -This release includes 17 merged PRs by 13 authors. - -[#88725]: https://github.com/cockroachdb/cockroach/pull/88725 -[#88738]: https://github.com/cockroachdb/cockroach/pull/88738 -[#88954]: https://github.com/cockroachdb/cockroach/pull/88954 -[#88955]: https://github.com/cockroachdb/cockroach/pull/88955 -[#88971]: https://github.com/cockroachdb/cockroach/pull/88971 -[#89019]: https://github.com/cockroachdb/cockroach/pull/89019 -[#89132]: https://github.com/cockroachdb/cockroach/pull/89132 -[#89264]: https://github.com/cockroachdb/cockroach/pull/89264 -[#89401]: https://github.com/cockroachdb/cockroach/pull/89401 -[#89618]: https://github.com/cockroachdb/cockroach/pull/89618 -[#89667]: https://github.com/cockroachdb/cockroach/pull/89667 -[#89706]: https://github.com/cockroachdb/cockroach/pull/89706 diff --git a/src/current/_includes/releases/v21.2/v21.2.2.md b/src/current/_includes/releases/v21.2/v21.2.2.md deleted file mode 100644 index 20b8c57893f..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.2.md +++ /dev/null @@ -1,182 +0,0 @@ -## v21.2.2 - -Release Date: December 1, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

      SQL language changes

- -- Fixed an oversight in which a full scan of a [partial index](https://www.cockroachlabs.com/docs/v21.2/partial-indexes) could be rejected due to the [`disallow_full_table_scans`](https://www.cockroachlabs.com/docs/v21.2/set-vars) setting. Full scans of partial indexes will no longer be rejected if `disallow_full_table_scans` is true, since a full scan of a partial index must be a constrained scan of the table. [#71437][#71437] -- The optimizer has been updated so that if [`disallow_full_table_scans`](https://www.cockroachlabs.com/docs/v21.2/set-vars) is true, it will never plan a full table scan with an estimated row count greater than `large_full_scan_rows`. If no alternative plan is possible, an error will be returned, just as it was before. However, cases where an alternative plan is possible will no longer produce an error, since the alternative plan will be chosen. As a result, users should see fewer errors due to `disallow_full_table_scans`. A side effect of this change is that if `disallow_full_table_scans` is set along with statement-level hints such as an index hint, the optimizer will try to avoid a full table scan while also respecting the index hint. If this is not possible, the optimizer will return an error and might not log the attempted full table scan or update the `sql.guardrails.full_scan_rejected.count` metric. If no index hint is used, the full scan will be logged and the metric updated. [#71437][#71437] -- Added support for a new [`index hint`](https://www.cockroachlabs.com/docs/v21.2/table-expressions#force-index-selection), `NO_FULL_SCAN`, which will prevent the optimizer from planning a full scan for the specified table. The hint can be used in the same way as other existing index hints. For example, `SELECT * FROM table_name@{NO_FULL_SCAN};`. Note that a full scan of a partial index may still be planned, unless `NO_FULL_SCAN` is forced in combination with a specific partial index via `FORCE_INDEX=index_name` (see the examples after this list). [#71437][#71437] -- [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze#debug-option) can now be used by [Serverless clusters](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart). [#71969][#71969] -- The [session variables](https://www.cockroachlabs.com/docs/v21.2/set-vars#supported-variables) `LC_COLLATE`, `LC_CTYPE`, `LC_MESSAGES`, `LC_MONETARY`, `LC_NUMERIC`, and `LC_TIME` were added for compatibility with PostgreSQL. They only support the `C.UTF-8` locale. [#72260][#72260] -- Fixed an unclear error message shown when creating the wrong type of forward index on a `JSONB` column; it now suggests creating a [GIN index](https://www.cockroachlabs.com/docs/v21.2/inverted-indexes). [#72361][#72361] -- Changed the `pgerror` code prefix from `CD` to `XC` for CockroachDB-specific errors, because the "C" class is reserved for the SQL standard. The `pgcode` `CDB00` used for unsatisfiable bounded staleness is now `XCUBS`. [#70448][#70448] -- The [`CREATE TABLE ... LIKE ...`](https://www.cockroachlabs.com/docs/v21.2/create-table) statement now copies `ON UPDATE` definitions for `INCLUDING DEFAULTS`. [#70537][#70537] -- CockroachDB now shows `indpred` in the [`pg_index` table](https://www.cockroachlabs.com/docs/v21.2/pg-catalog) for [partial indexes](https://www.cockroachlabs.com/docs/v21.2/partial-indexes); this was previously `NULL` for partial indexes. [#70884][#70884] -- Fixed a bug where [`LINESTRINGZ`, `LINESTRINGZM`, and `LINESTRINGM`](https://www.cockroachlabs.com/docs/v21.2/linestring) could not be used as a column type. [#70747][#70747] -- Added the [`crdb_internal.reset_index_usage_stats()` function](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators) to clear index usage stats. This can be invoked from the SQL shell. [#72843][#72843] -- The string "visible" is now usable as a table or column name without extra quoting. [#70563][#70563] -- The `aggregation_interval` column has been added to the [crdb_internal.statement_statistics and crdb_internal.transaction_statistics](https://www.cockroachlabs.com/docs/v21.2/crdb-internal) tables, representing the aggregation duration of the respective statistics. [#72941][#72941] -- The `diagnostics.sql_stats_reset.interval` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) was removed. In previous versions of CockroachDB, in-memory SQL statistics were reset periodically. This behavior has been removed, since persisted SQL statistics were introduced in v21.2. The `diagnostics.forced_sql_stats_reset.interval` cluster setting now only controls the reset of the reported SQL statistics if they are not collected by the telemetry reporter. [#72941][#72941] - -
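A sketch of the `NO_FULL_SCAN` hint described above, alone and combined with a forced partial index; `t` and `partial_idx` are hypothetical names, and the combined form follows the note's `FORCE_INDEX=index_name` guidance:

~~~ sql
-- Error out rather than plan a full table scan:
SELECT * FROM t@{NO_FULL_SCAN};

-- Also forbid a full scan of a specific partial index:
SELECT * FROM t@{FORCE_INDEX=partial_idx,NO_FULL_SCAN};
~~~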

      Operational changes

      - -- [Job IDs](https://www.cockroachlabs.com/docs/v21.2/show-jobs) and [Session IDs](https://www.cockroachlabs.com/docs/v21.2/show-sessions) are no longer redacted. These values do not represent sensitive or identifiable data, but do aid in debugging problems with the jobs system. [#72975][#72975] - -

      Command-line changes

      - -- The [SQL shell](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql) now supports a `\statement-diag` command for listing and downloading [statement diagnostics bundles](https://www.cockroachlabs.com/docs/v21.2/explain-analyze#debug-option). [#71969][#71969] - -

      API endpoint changes

- -- The `aggregationInterval` field has been added to the combined statements response. [#72941][#72941] - -

      DB Console changes

      - -- The [Session](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page), [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages are now grouped inside the new SQL Activity page. [#72052][#72052] -- The [Statement Details page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page) and [Transaction Details page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page#transaction-details-page) now show the mean rows written metric. [#72006][#72006] -- A new **Rows Written** column has been added to the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) tables. This column will not show up by default, but can be selected from the column selector. [#72006][#72006] -- The tooltip text on the Statement column of the [Sessions table](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page) was edited to clarify that we show only currently active statements. [#72100][#72100] -- The **Not Found** page was updated. [#72758][#72758] -- The [Statements page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) now uses `$ internal` on the filter to align with the [Transactions page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page). [#72755][#72755] -- The Application filter on the [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) and [Statement](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) pages is now multi-select. [#72755][#72755] -- The default value when no Application is selected on the Transactions and Statements filter now excludes internal transactions and statements. [#72755][#72755] -- There is now a new page that is displayed when something goes wrong and the page crashes. [#72796][#72796] -- Fixed an issue on the [Metrics pages](https://www.cockroachlabs.com/docs/v21.2/ui-overview-dashboard) where graphs didn't show data for date ranges older than 10 days. [#72864][#72864] -- [Metrics pages](https://www.cockroachlabs.com/docs/v21.2/ui-overview-dashboard) now show gaps instead of data interpolation. [#72864][#72864] and [#72744][#72744] -- Node events now display a permission error rather than an internal server error when the user does not have [`ADMIN` privileges to view events](https://www.cockroachlabs.com/docs/v21.2/ui-overview). [#72792][#72792] -- Fixed an issue with drag-to-zoom granularity. [#72855][#72855] -- The `Interval Start Time (UTC)` column on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages has been renamed `Aggregation Interval (UTC)` and is the interval of aggregation. The [Statement Details page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page) now also displays the interval for which the user is viewing statement details. [#72941][#72941] -- The [Transaction Details page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page#transaction-details-page) now shows statement statistics scoped by the transaction fingerprint. [#72941][#72941] - -

      Bug fixes

- -- When using the `json-fluent` and `json-fluent-compact` [logging formats](https://www.cockroachlabs.com/docs/v21.2/log-formats), the `tag` field now uses the same normalization algorithm as used for output to files. That is, if the CockroachDB executable is renamed to contain periods (e.g., `cockroach.testbinary`), the periods are now eliminated instead of replaced by `_`. This is the behavior that was originally intended. This change does not affect deployments that use the standard executable name `cockroach`. [#71075][#71075] -- The [`cockroach debug zip` command](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-zip), the [`cockroach debug list-files` command](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-list-files), and the [Advanced Debug page](https://www.cockroachlabs.com/docs/v21.2/ui-debug-pages) that enables log file access are now able to gather log files stored across all configured logging directories. Prior to this patch, only log files from the directory associated with the `DEV` file sink were visible. This bug had existed since CockroachDB v19.x. Note that the behavior remains incomplete if two or more file groups in the logging configuration use similar names that only differ in their use of periods (e.g., a file group named `one.two` and another one named `onetwo`). To avoid any issue related to this situation, use more distinct file group names. [#71075][#71075] -- Fixed a bug where usernames in [`ALTER TABLE ... OWNER TO`](https://www.cockroachlabs.com/docs/v21.2/owner-to) would not be normalized to lower case. [#72470][#72470] -- Fixed a bug where the Show All filter on the [Statements page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) didn't display all the statements when the search box contained an empty string. [#72052][#72052] -- Fixed a bug in prior betas of v21.2 where some error codes returned when looking for a descriptor in a non-existent database were changed from `UndefinedDatabase (3D000)` to `UndefinedObject (42704)`. Name resolution when the current database is undefined will now return `UndefinedDatabase`. [#71566][#71566] -- Fixed an incorrect `no data source matches prefix` error for queries that use a set-returning function on the right-hand side of a [`JOIN`](https://www.cockroachlabs.com/docs/v21.2/joins) unless `LATERAL` is explicitly specified. [#71445][#71445] -- Fixed a bug where using [`CREATE TABLE AS ....`](https://www.cockroachlabs.com/docs/v21.2/create-table-as) with a source query that referred to a sequence would not work. [#71541][#71541] -- Support was added for the `"{}"` format for array columns in [`COPY FROM STDIN WITH CSV`](https://www.cockroachlabs.com/docs/v21.2/copy-from). [#72693][#72693] -- Fixed a bug which caused [`ALTER COLUMN TYPE`](https://www.cockroachlabs.com/docs/v21.2/alter-column) statements to incorrectly fail. [#71165][#71165] -- Fixed a potential descriptor corruption bug for tables with a column with a [`DEFAULT`](https://www.cockroachlabs.com/docs/v21.2/default-value) expression referencing a [`SEQUENCE`](https://www.cockroachlabs.com/docs/v21.2/create-sequence) and with an [`ON UPDATE` expression](https://www.cockroachlabs.com/docs/v21.2/create-table#on-update-expressions). [#72362][#72362] -- Fixed a bug where [schema changes](https://www.cockroachlabs.com/docs/v21.2/online-schema-changes) running during node shutdown could sometimes fail permanently when they should not. [#72333][#72333] -- Fixed a panic that could occur with invalid [GeoJSON](https://www.cockroachlabs.com/docs/v21.2/geojson) input using `ST_GeomFromGeoJSON/ST_GeogFromGeoJSON`. [#71309][#71309] -- Fixed a bug where specifying [`IntervalStyle`](https://www.cockroachlabs.com/docs/v21.2/interval) or [`DateStyle`](https://www.cockroachlabs.com/docs/v21.2/date) on `options=-c...` in a [connection string](https://www.cockroachlabs.com/docs/v21.2/connection-parameters#connect-using-a-url) would fail, even if the `sql.defaults.datestyle.enabled` and `sql.defaults.intervalstyle.enabled` [cluster settings](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) were set. [#72067][#72067] -- Fixed a bug where [session variables](https://www.cockroachlabs.com/docs/v21.2/set-vars) passed in the [connection string](https://www.cockroachlabs.com/docs/v21.2/connection-parameters#connect-using-a-url) were case-sensitive. Now they are all correctly normalized to lower case. [#72067][#72067] -- Fixed a bug where `atttypmod` in [`pg_catalog.pg_attributes`](https://www.cockroachlabs.com/docs/v21.2/pg-catalog) for [`DECIMAL`](https://www.cockroachlabs.com/docs/v21.2/decimal) types with precision but no width was incorrectly `-1`. [#72074][#72074] -- Fixed a bug where the [`setval`](https://www.cockroachlabs.com/docs/v21.2/create-sequence#sequence-functions) function did not invalidate cached sequence values (see the example after this list). [#71821][#71821] -- Fixed a bug where, when creating an object, [default privileges](https://www.cockroachlabs.com/docs/v21.2/authorization) from users other than the user creating the object would be added to the privileges of the object. This fix ensures only the relevant default privileges are applied. [#72410][#72410] -- Fixed a bug where Z and M coordinate columns caused a panic for [`geometry_columns` and `geography_columns`](https://www.cockroachlabs.com/docs/v21.2/spatial-glossary#spatial-system-tables). [#70814][#70814] -- Fixed a bug where certain [schema changes](https://www.cockroachlabs.com/docs/v21.2/online-schema-changes) (e.g., SET NULL) did not work if there was an [expression index](https://www.cockroachlabs.com/docs/v21.2/expression-indexes) on the table. [#72024][#72024] -- The connect timeout for [`grpc` connections](https://www.cockroachlabs.com/docs/v21.2/architecture/distribution-layer#grpc) is set to 20s to match the pre-v20.2 default value. [#71517][#71517] -- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) no longer crashes when encountering unresolved write intents. [#71983][#71983] -- Fixed an incorrect bug hint for the `sql.defaults.datestyle.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings). [#70900][#70900] -- Fixed a bug which caused internal errors when collecting statistics on tables with [virtual computed columns](https://www.cockroachlabs.com/docs/v21.2/computed-columns). [#71234][#71234] -- Fixed a bug that incorrectly populated the `indkey` column of [`pg_catalog.pg_index`](https://www.cockroachlabs.com/docs/v21.2/pg-catalog) for [expression indexes](https://www.cockroachlabs.com/docs/v21.2/expression-indexes). This bug was present since the introduction of expression indexes in version 21.2.0. [#72064][#72064] -- Fixed a bug where some queries against [`pg_catalog.pg_type`](https://www.cockroachlabs.com/docs/v21.2/pg-catalog) could throw an error if they looked up a non-existent ID.
[#72885][#72885] -- Corrected how the `type` displays for ZM shapes in [`geometry_columns`](https://www.cockroachlabs.com/docs/v21.2/spatial-glossary#spatial-system-tables) to match PostGIS output. This previously incorrectly included the Z/M lettering. [#72809][#72809] -- Corrected how `type` displays in [`geometry_columns`](https://www.cockroachlabs.com/docs/v21.2/spatial-glossary#spatial-system-tables) to better match PostGIS. This previously used the wrong case. [#72809][#72809] -- Fixed a bug where CockroachDB could encounter an internal error when executing a [zigzag join](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer#zigzag-joins) in some cases (when there are multiple filters present and at least one filter refers to the column that is part of the `STORING` clause of the secondary index that is used by the zigzag join). [#71253][#71253] -- Fixed a bug where CockroachDB could not set the `TableOID` and `TableAttributeNumber` attributes of the `RowDescription` message of the `pgwire` protocol in some cases (these values would be left as 0). [#72450][#72450] -- Fixed a bug where CockroachDB could encounter an internal error or crash when some queries involving tuples with [`ENUMs`](https://www.cockroachlabs.com/docs/v21.2/enum) were executed in a distributed manner. [#72482][#72482] -- Fixed a bug where if tracing (the `sql.trace.txn.enable_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings)) was enabled on the cluster, the statement diagnostics collection ([`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze#debug-option)) wouldn't work. [#70023][#70023] -- Fixed a bug causing tracing to external tracers to inadvertently stop after the Enqueue Range or Allocator debug pages were used. [#72465][#72465] -- Fixed a bug preventing tuple type labels from being propagated across queries when run under [DistSQL](https://www.cockroachlabs.com/docs/v21.2/architecture/sql-layer#distsql). [#70392][#70392] -- CockroachDB is now less likely to OOM when queries reading a lot of data are issued with high concurrency (these queries are likely to hit the memory budget determined by the [`--max-sql-memory` startup parameter](https://www.cockroachlabs.com/docs/v21.2/cockroach-start#general)). [#70809][#70809] -- The `indexprs` column of [`pg_catalog.pg_index`](https://www.cockroachlabs.com/docs/v21.2/pg-catalog) is now populated with string representations of every expression element in the index. If the index is not an [expression index](https://www.cockroachlabs.com/docs/v21.2/expression-indexes), `indexprs` is `NULL`. The `indexdef` column of `pg_catalog.pg_indexes` and the `indpred` column of `pg_catalog.pg_index` now correctly display user-defined types. [#72870][#72870] -- Fixed a bug where introspection tables and error messages would not correctly display intervals according to the `intervalstyle` [session variable](https://www.cockroachlabs.com/docs/v21.2/set-vars). [#72690][#72690] -- Fixed a bug where index definitions in [`pg_catalog.pg_indexes`](https://www.cockroachlabs.com/docs/v21.2/pg-catalog) would not format intervals according to the `intervalstyle` [session variable](https://www.cockroachlabs.com/docs/v21.2/set-vars). [#72903][#72903] -- [Statement statistics](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-statistics) are now grouped by the statement's corresponding transaction fingerprints.
[#72941][#72941] -- The query backing [`crdb_internal.cluster_contended_indexes`](https://www.cockroachlabs.com/docs/v21.2/crdb-internal) improperly assumed that index IDs were unique across the database. This change adds the proper scoping by table descriptor ID, truing up the contents of that view. [#73025][#73025] - -
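A small sketch of the `setval` invalidation fix mentioned above (the sequence name `s` is hypothetical):

~~~ sql
CREATE SEQUENCE s;
SELECT nextval('s');    -- returns 1
SELECT setval('s', 10);
SELECT nextval('s');    -- now correctly returns 11, even with cached values
~~~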

      Performance improvements

      - -- Fixed a performance regression in planning that could occur for simple queries on schemas with a large number of indexes. [#72240][#72240] -- The conversion of [Well Known Text](https://www.cockroachlabs.com/docs/v21.2/well-known-text) to a spatial type is improved. [#70182][#70182] -- Improved [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) performance in cases where it encounters large numbers of unresolved write intents. [#72271][#72271] -- Fixed a limitation that made creating [partial indexes](https://www.cockroachlabs.com/docs/v21.2/partial-indexes) inefficient. [#70205][#70205] -- Backfills initiated by schema changes now periodically checkpoint progress to avoid excessive re-emitting of already emitted spans. [#72788][#72788] - -
      - -

      Contributors

      - -This release includes 133 merged PRs by 45 authors. -We would like to thank the following contributors from the CockroachDB community: - -- neeral - -
      - -[#70023]: https://github.com/cockroachdb/cockroach/pull/70023 -[#70182]: https://github.com/cockroachdb/cockroach/pull/70182 -[#70205]: https://github.com/cockroachdb/cockroach/pull/70205 -[#70392]: https://github.com/cockroachdb/cockroach/pull/70392 -[#70448]: https://github.com/cockroachdb/cockroach/pull/70448 -[#70489]: https://github.com/cockroachdb/cockroach/pull/70489 -[#70537]: https://github.com/cockroachdb/cockroach/pull/70537 -[#70563]: https://github.com/cockroachdb/cockroach/pull/70563 -[#70747]: https://github.com/cockroachdb/cockroach/pull/70747 -[#70809]: https://github.com/cockroachdb/cockroach/pull/70809 -[#70814]: https://github.com/cockroachdb/cockroach/pull/70814 -[#70884]: https://github.com/cockroachdb/cockroach/pull/70884 -[#70900]: https://github.com/cockroachdb/cockroach/pull/70900 -[#71075]: https://github.com/cockroachdb/cockroach/pull/71075 -[#71165]: https://github.com/cockroachdb/cockroach/pull/71165 -[#71234]: https://github.com/cockroachdb/cockroach/pull/71234 -[#71253]: https://github.com/cockroachdb/cockroach/pull/71253 -[#71309]: https://github.com/cockroachdb/cockroach/pull/71309 -[#71437]: https://github.com/cockroachdb/cockroach/pull/71437 -[#71445]: https://github.com/cockroachdb/cockroach/pull/71445 -[#71517]: https://github.com/cockroachdb/cockroach/pull/71517 -[#71541]: https://github.com/cockroachdb/cockroach/pull/71541 -[#71566]: https://github.com/cockroachdb/cockroach/pull/71566 -[#71821]: https://github.com/cockroachdb/cockroach/pull/71821 -[#71964]: https://github.com/cockroachdb/cockroach/pull/71964 -[#71969]: https://github.com/cockroachdb/cockroach/pull/71969 -[#71983]: https://github.com/cockroachdb/cockroach/pull/71983 -[#72006]: https://github.com/cockroachdb/cockroach/pull/72006 -[#72024]: https://github.com/cockroachdb/cockroach/pull/72024 -[#72051]: https://github.com/cockroachdb/cockroach/pull/72051 -[#72052]: https://github.com/cockroachdb/cockroach/pull/72052 -[#72064]: https://github.com/cockroachdb/cockroach/pull/72064 -[#72067]: https://github.com/cockroachdb/cockroach/pull/72067 -[#72074]: https://github.com/cockroachdb/cockroach/pull/72074 -[#72100]: https://github.com/cockroachdb/cockroach/pull/72100 -[#72240]: https://github.com/cockroachdb/cockroach/pull/72240 -[#72260]: https://github.com/cockroachdb/cockroach/pull/72260 -[#72271]: https://github.com/cockroachdb/cockroach/pull/72271 -[#72333]: https://github.com/cockroachdb/cockroach/pull/72333 -[#72361]: https://github.com/cockroachdb/cockroach/pull/72361 -[#72362]: https://github.com/cockroachdb/cockroach/pull/72362 -[#72410]: https://github.com/cockroachdb/cockroach/pull/72410 -[#72450]: https://github.com/cockroachdb/cockroach/pull/72450 -[#72465]: https://github.com/cockroachdb/cockroach/pull/72465 -[#72470]: https://github.com/cockroachdb/cockroach/pull/72470 -[#72482]: https://github.com/cockroachdb/cockroach/pull/72482 -[#72657]: https://github.com/cockroachdb/cockroach/pull/72657 -[#72690]: https://github.com/cockroachdb/cockroach/pull/72690 -[#72693]: https://github.com/cockroachdb/cockroach/pull/72693 -[#72744]: https://github.com/cockroachdb/cockroach/pull/72744 -[#72755]: https://github.com/cockroachdb/cockroach/pull/72755 -[#72758]: https://github.com/cockroachdb/cockroach/pull/72758 -[#72788]: https://github.com/cockroachdb/cockroach/pull/72788 -[#72792]: https://github.com/cockroachdb/cockroach/pull/72792 -[#72796]: https://github.com/cockroachdb/cockroach/pull/72796 -[#72809]: https://github.com/cockroachdb/cockroach/pull/72809 -[#72843]: 
https://github.com/cockroachdb/cockroach/pull/72843 -[#72855]: https://github.com/cockroachdb/cockroach/pull/72855 -[#72864]: https://github.com/cockroachdb/cockroach/pull/72864 -[#72870]: https://github.com/cockroachdb/cockroach/pull/72870 -[#72885]: https://github.com/cockroachdb/cockroach/pull/72885 -[#72903]: https://github.com/cockroachdb/cockroach/pull/72903 -[#72941]: https://github.com/cockroachdb/cockroach/pull/72941 -[#72951]: https://github.com/cockroachdb/cockroach/pull/72951 -[#72975]: https://github.com/cockroachdb/cockroach/pull/72975 -[#73025]: https://github.com/cockroachdb/cockroach/pull/73025 -[7b7cbcd33]: https://github.com/cockroachdb/cockroach/commit/7b7cbcd33 -[8cb96beab]: https://github.com/cockroachdb/cockroach/commit/8cb96beab diff --git a/src/current/_includes/releases/v21.2/v21.2.3.md b/src/current/_includes/releases/v21.2/v21.2.3.md deleted file mode 100644 index ccd951d9c33..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.3.md +++ /dev/null @@ -1,121 +0,0 @@ -## v21.2.3 - -Release Date: December 14, 2021 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v21-2-3-enterprise-edition-changes">Enterprise edition changes</h3>

-
-- Fixed a limitation of [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import) for tables using user-defined types: previously, any change to the set of tables or views referencing the type, or any change to privileges on the type, during the `IMPORT` would cause it to fail. Now, new references to the type and [`GRANT`](https://www.cockroachlabs.com/docs/v21.2/grant) or [`REVOKE`](https://www.cockroachlabs.com/docs/v21.2/revoke) operations performed while the `IMPORT` is ongoing no longer cause failure. [#71500][#71500]
-- Fixed a bug where [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) could sometimes map OIDs to invalid types in certain circumstances involving user-defined types. [#73119][#73119]
-

-<h3 id="v21-2-3-sql-language-changes">SQL language changes</h3>

-
-- The experimental [`ALTER COLUMN TYPE`](https://www.cockroachlabs.com/docs/v21.2/alter-column#altering-column-data-types) statement is no longer permitted when the column is stored in a [secondary index](https://www.cockroachlabs.com/docs/v21.2/schema-design-indexes). Prior to this change, that was the only sort of secondary index membership allowed, but the operation resulted in a subtly corrupted table. [#72797][#72797]
-- Statements containing multiple [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v21.2/update-data#use-insert-on-conflict), [`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update), or [`DELETE`](https://www.cockroachlabs.com/docs/v21.2/delete) subqueries can cause data corruption if they modify the same row multiple times. For example, the following statement will corrupt table `t`:
-
-    ~~~ sql
-    CREATE TABLE t (i INT, j INT, PRIMARY KEY (i), INDEX (j)); INSERT INTO t VALUES (0, 0); WITH cte1 AS (UPDATE t SET j = 1 WHERE i = 0 RETURNING *), cte2 AS (UPDATE t SET j = 2 WHERE i = 0 RETURNING *) SELECT 1;
-    ~~~
-
-    Until this is fixed, this change disallows statements with multiple subqueries that modify the same table. Applications can work around this by rewriting problematic statements. For example, the query above can be rewritten as an explicit multi-statement transaction:
-
-    ~~~ sql
-    BEGIN; UPDATE t SET j = 1 WHERE i = 0; UPDATE t SET j = 2 WHERE i = 0; SELECT 1; COMMIT;
-    ~~~
-
-    Or, if it doesn't matter which update "wins", it can be written as multiple non-mutating CTEs on an `UPDATE` statement:
-
-    ~~~ sql
-    WITH cte1 AS (SELECT 1), cte2 AS (SELECT 2) UPDATE t SET j = x.j FROM (SELECT * FROM cte1 UNION ALL SELECT * FROM cte2) AS x (j) WHERE i = 0 RETURNING 1;
-    ~~~
-
-    which in this case could be written more simply as:
-
-    ~~~ sql
-    UPDATE t SET j = x.j FROM (VALUES (1), (2)) AS x (j) WHERE i = 0 RETURNING 1;
-    ~~~
-
-    Note that in these last two rewrites the first update wins, rather than the last. None of these rewrites suffers from the corruption problem. To override this change and allow these statements in spite of the risk of corruption, applications can run:
-
-    ~~~ sql
-    SET CLUSTER SETTING sql.multiple_modifications_of_table.enabled = true;
-    ~~~
-
-    However, with the `sql.multiple_modifications_of_table.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) enabled, there is nothing to prevent this type of corruption from occurring if the same row is modified multiple times by a single statement. To check for corruption, use the `EXPERIMENTAL SCRUB` command:
-
-    ~~~ sql
-    EXPERIMENTAL SCRUB TABLE t WITH OPTIONS INDEX ALL;
-    ~~~
-
-    [#71595][#71595]
-
-- [`RESTORE TABLE`](https://www.cockroachlabs.com/docs/v21.2/restore) of a regional-by-row table into a multi-region database with the same regions as the backed-up database is now allowed. The user must ensure that the regions in the backed-up database and the database being restored into match, and were added in the same order, for the [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) to work. [#72088][#72088]
-- The structured payloads used for telemetry logs now include two new fields: `CostEstimate` and `Distribution`. `CostEstimate` is the cost of the query as estimated by the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer), and `Distribution` is the distribution of the DistSQL query plan (local, full, or partial). [#73410][#73410]
-- Fixed a bug which allowed [computed columns](https://www.cockroachlabs.com/docs/v21.2/computed-columns) to also have [`DEFAULT`](https://www.cockroachlabs.com/docs/v21.2/default-value) expressions. [#73190][#73190]
-

-<h3 id="v21-2-3-db-console-changes">DB Console changes</h3>

-
-- When requesting the `pprofui` endpoints from the [**Advanced Debug** page](https://www.cockroachlabs.com/docs/v21.2/ui-debug-pages) in DB Console, operators can now query by node ID in order to request `pprofui` data from any node in the cluster without having to connect to its DB Console directly. Profiling UI links are in a separate section, along with a `nodeID` selector to allow for easy targeting. [#71103][#71103]
-- The absolute links on the [**Advanced Debug** page](https://www.cockroachlabs.com/docs/v21.2/ui-debug-pages) in DB Console have been updated to relative links. This enables these links to work with the superuser dashboard in [Cloud Console](https://cockroachlabs.cloud). [#73067][#73067]
-- When an error is encountered on the [**Statements**](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), [**Transactions**](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page), or [**Sessions**](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page) page, the user can now click a reload button to reload the page. [#73115][#73115]
-

-<h3 id="v21-2-3-bug-fixes">Bug fixes</h3>

-
-- Fixed a bug where [`GENERATED ... IDENTITY`](https://www.cockroachlabs.com/docs/v21.2/create-table#identity-columns) would panic when using a non-`INT` value during table creation. [#73029][#73029]
-- Fixed a bug whereby setting the `CACHE` for a sequence to `1` was ignored. Before this change, [`ALTER SEQUENCE ... CACHE 1`](https://www.cockroachlabs.com/docs/v21.2/alter-sequence) would succeed but would not modify the cache value. [#71449][#71449]
-- Fixed a bug where a crash during [startup](https://www.cockroachlabs.com/docs/v21.2/cockroach-start) could cause all subsequent starts to fail. [#73124][#73124]
-- Fixed an internal error that could occur during planning for some [set operations](https://www.cockroachlabs.com/docs/v21.2/selection-queries#set-operations) (i.e., `UNION`, `INTERSECT`, or `EXCEPT`) when at least one side of the set operation was ordered on a column that was not output by the set operation. This bug was first introduced in v21.2.0 and does not exist in prior versions. [#73147][#73147]
-- Manually enqueueing ranges via the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview) will no longer crash nodes that contain an uninitialized replica for the enqueued range. [#73039][#73039]
-- Fixed a crash with the message "attempting to propose command writing below closed timestamp" that could occur, typically on overloaded systems experiencing non-cooperative lease transfers. [#73165][#73165]
-- Fixed two bugs in the logic that optimizes the number of spans to [back up](https://www.cockroachlabs.com/docs/v21.2/backup). [#73176][#73176]
-- The [**Transactions** page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) now uses the correct selector for sort settings and filters. [#73291][#73291]
-- The GC queue now respects the `kv.queue.process.guaranteed_time_budget` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings). [#70126][#70126]
-- The `cockroach debug unsafe-remove-dead-replicas` tool was improved to handle the existence of learners. It will now produce the desired results in more circumstances. The tool remains dangerous and can irrevocably corrupt a cluster. [#70756][#70756]
-- Fixed a rare internal error ("estimated row count must be non-zero") which could occur when planning queries using a [GIN index](https://www.cockroachlabs.com/docs/v21.2/inverted-indexes). This error could occur if the histogram on the GIN index showed that there were no rows. [#73354][#73354]
-- Fixed a bug where [`SHOW CREATE SCHEDULES`](https://www.cockroachlabs.com/docs/v21.2/show-create-schedule) was not redacting sensitive fields before displaying the [`CREATE SCHEDULE`](https://www.cockroachlabs.com/docs/v21.2/create-schedule-for-backup) query. [#71362][#71362]
-- The `txnwaitqueue.pusher.waiting` metric no longer over-reports the number of pushing transactions in some cases. [#71744][#71744]
-- Fixed a rare condition that could cause a range merge to get stuck waiting on itself. The symptom of this deadlock was a goroutine stuck in `handleMergeInProgressError` for tens of minutes. [#72050][#72050]
-- [`RESTORE ... FROM LATEST IN`](https://www.cockroachlabs.com/docs/v21.2/restore) now restores the latest backup from a collection without needing to first inspect the collection to supply its actual path (see the sketch after this list). [#73454][#73454]
-- Prevented a panic in the parser when trying to parse the `.@n` tuple field dereference syntax in the (invalid) `n=0` case. [#73545][#73545]
-- Fixed a bug where CockroachDB did not exit with the correct exit code when it ran out of disk space while the node was running. This behavior was new in v21.2 and was not behaving as intended. [#70853][#70853]
-- Fixed certain bugs where [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v21.2/create-table-as) or `CREATE MATERIALIZED VIEW` could panic if the [`SELECT` query](https://www.cockroachlabs.com/docs/v21.2/selection-queries) read from an internal table requiring internal database state. [#73593][#73593]
-
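-
-A minimal sketch of the `RESTORE ... FROM LATEST IN` form noted above; the collection URI is an illustrative assumption:
-
-~~~ sql
-RESTORE FROM LATEST IN 's3://backup-bucket/collection?AUTH=implicit';  -- restores the most recent backup without looking up its subdirectory first
-~~~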

-<h3 id="v21-2-3-performance-improvements">Performance improvements</h3>

-
-- The performance of transaction deadlock detection is now more stable, even with significant [transaction contention](https://www.cockroachlabs.com/docs/v21.2/transactions#transaction-contention). [#71744][#71744]
-- [Follower reads](https://www.cockroachlabs.com/docs/v21.2/follower-reads) that encounter many abandoned intents are now able to efficiently resolve those intents. This resolves an asymmetry where follower reads were previously less efficient at resolving abandoned intents than regular reads evaluated on a leaseholder. [#71884][#71884]
-

-<h3 id="v21-2-3-contributors">Contributors</h3>

      - -This release includes 61 merged PRs by 30 authors. - -[#70126]: https://github.com/cockroachdb/cockroach/pull/70126 -[#70756]: https://github.com/cockroachdb/cockroach/pull/70756 -[#70853]: https://github.com/cockroachdb/cockroach/pull/70853 -[#71103]: https://github.com/cockroachdb/cockroach/pull/71103 -[#71362]: https://github.com/cockroachdb/cockroach/pull/71362 -[#71449]: https://github.com/cockroachdb/cockroach/pull/71449 -[#71500]: https://github.com/cockroachdb/cockroach/pull/71500 -[#71595]: https://github.com/cockroachdb/cockroach/pull/71595 -[#71744]: https://github.com/cockroachdb/cockroach/pull/71744 -[#71884]: https://github.com/cockroachdb/cockroach/pull/71884 -[#72050]: https://github.com/cockroachdb/cockroach/pull/72050 -[#72088]: https://github.com/cockroachdb/cockroach/pull/72088 -[#72797]: https://github.com/cockroachdb/cockroach/pull/72797 -[#73029]: https://github.com/cockroachdb/cockroach/pull/73029 -[#73039]: https://github.com/cockroachdb/cockroach/pull/73039 -[#73067]: https://github.com/cockroachdb/cockroach/pull/73067 -[#73115]: https://github.com/cockroachdb/cockroach/pull/73115 -[#73119]: https://github.com/cockroachdb/cockroach/pull/73119 -[#73124]: https://github.com/cockroachdb/cockroach/pull/73124 -[#73147]: https://github.com/cockroachdb/cockroach/pull/73147 -[#73165]: https://github.com/cockroachdb/cockroach/pull/73165 -[#73176]: https://github.com/cockroachdb/cockroach/pull/73176 -[#73190]: https://github.com/cockroachdb/cockroach/pull/73190 -[#73291]: https://github.com/cockroachdb/cockroach/pull/73291 -[#73354]: https://github.com/cockroachdb/cockroach/pull/73354 -[#73410]: https://github.com/cockroachdb/cockroach/pull/73410 -[#73454]: https://github.com/cockroachdb/cockroach/pull/73454 -[#73545]: https://github.com/cockroachdb/cockroach/pull/73545 -[#73593]: https://github.com/cockroachdb/cockroach/pull/73593 diff --git a/src/current/_includes/releases/v21.2/v21.2.4.md b/src/current/_includes/releases/v21.2/v21.2.4.md deleted file mode 100644 index bae42c4c66c..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.4.md +++ /dev/null @@ -1,129 +0,0 @@ -## v21.2.4 - -Release Date: January 10, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v21-2-4-security-updates">Security updates</h3>

-
-- It is now possible to pre-compute the hash of a SQL user's password credentials client-side and set the user's password using the hash, so that CockroachDB never sees the password string in the clear during the SQL session. This auto-detection is subject to the new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `server.user_login.store_client_pre_hashed_passwords.enabled`, which defaults to `true` (i.e., feature enabled). This feature is meant for use in automation/orchestration, when the control plane constructs passwords for users outside of CockroachDB and there is an architectural desire to ensure that cleartext passwords are not transmitted or stored in the clear. Note: **when the client provides the password hash, CockroachDB cannot carry out any checks on the internal structure of the password**, such as minimum length, special characters, etc. Should a deployment require such checks to be performed database-side, the operator would need to disable the mechanism via the cluster setting named above. When upgrading a cluster from a previous version, to ensure that the feature remains disabled throughout the upgrade, use the following statement prior to the upgrade: `INSERT INTO system.settings(name, value, "valueType") VALUES('server.user_login.store_client_pre_hashed_passwords.enabled', 'false', 'b');`. (We do not recommend relying on the database to perform password checks. Our recommended deployment best practice is to implement credential definitions in a control plane / identity provider that is separate from the database.) [#73855][#73855]
-- The `server.identity_map.configuration` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) allows a `pg_ident.conf` file to be uploaded to support dynamically remapping system identities (e.g., Kerberos or X.509 principals) to database usernames (see the sketch after this list). This supports use cases where X.509 certificates must conform to organizational standards that mandate the use of Common Names that are not valid SQL usernames (e.g., `CN=carl@example.com` => `carl`). Mapping rules that result in the `root`, `node`, or other reserved usernames will result in an error when the client attempts to connect. [#74459][#74459]
-- The `client_authentication_info` structured log message provides a new `"SystemIdentity"` field with the client-provided system identity. The existing `"User"` field is populated after any Host-Based Authentication (HBA) rules have been selected and applied, which may include a system-identity to database-username mapping. [#74459][#74459]
-- GSSAPI-based authentication can now use either the HBA `"map"` option or `"include_realm=0"` to map the incoming principal to a database username. Existing configurations will operate unchanged; however, operators are encouraged to migrate from `"include_realm=0"` to `"map"` to avoid ambiguity in deployments where multiple realms are present. [#74459][#74459]
-- Incoming system identities are normalized to lower-case before they are evaluated against any active identity-mapping HBA configuration. For example, an incoming GSSAPI principal `"carl@EXAMPLE.COM"` would only be matched by rules such as `"example carl@example.com carl"` or `"example /^(.*)@example.com$ \1"`. [#74459][#74459]
-
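-
-A minimal sketch of uploading an identity-mapping configuration via the cluster setting described above; the map name and rule mirror the examples in these notes and are illustrative:
-
-~~~ sql
-SET CLUSTER SETTING server.identity_map.configuration = 'example /^(.*)@example.com$ \1';  -- pg_ident.conf rule: map-name system-identity database-username
-~~~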

-<h3 id="v21-2-4-enterprise-edition-changes">{{ site.data.products.enterprise }} edition changes</h3>

-
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) can now be created with a new option called `metrics_label`, which lets operators configure changefeeds to use a dedicated set of metrics so that those changefeeds can be monitored independently of other changefeeds in the system (see the sketch below). [#73014][#73014]
-
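-
-A minimal sketch of the option; the table, sink URI, and label are illustrative assumptions:
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE orders INTO 'kafka://kafka.internal:9092' WITH metrics_label = 'orders_feed';  -- this feed's metrics are exported under a dedicated "orders_feed" label
-~~~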

-<h3 id="v21-2-4-sql-language-changes">SQL language changes</h3>

-
-- The [`create_type_statements` table](https://www.cockroachlabs.com/docs/v21.2/crdb-internal#create_type_statements) now has an index on `descriptor_id`. [#73669][#73669]
-- Added the new column `stmt` to the [`crdb_internal.(cluster|node)_distsql_flows` virtual table](https://www.cockroachlabs.com/docs/v21.2/crdb-internal#data-exposed-by-crdb_internal). It is populated on a best-effort basis. [#73581][#73581]
-- [Table backups](https://www.cockroachlabs.com/docs/v21.2/backup#backup-a-table-or-view) of [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-by-row-tables), [`REGIONAL BY TABLE`](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-tables), and [`GLOBAL` tables](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#global-tables) are now supported. [#73087][#73087]
-- The [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `sql.defaults.reorder_joins_limit`, which controls the default for the [session setting](https://www.cockroachlabs.com/docs/v21.2/set-vars) `reorder_joins_limit`, is now public and included in the [cluster setting docs](https://www.cockroachlabs.com/docs/v21.2/cluster-settings). [#73889][#73889]
-- The `RULE` [privilege](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) was added for compatibility with PostgreSQL. It cannot be granted, but it is supported as a parameter of the `has_table_privilege` [function](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#compatibility-functions). [#74065][#74065]
-- The [`CREATE ROLE`](https://www.cockroachlabs.com/docs/v21.2/create-role) and [`ALTER ROLE`](https://www.cockroachlabs.com/docs/v21.2/alter-role) statements now accept password hashes computed by the client app, for example: `CREATE USER foo WITH PASSWORD 'CRDB-BCRYPT$2a$10$.....'`. This feature is not meant for use by human users in interactive sessions; it is meant for use in programs, using the computation algorithm described below. The auto-detection can be disabled by changing the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `server.user_login.store_client_pre_hashed_passwords.enabled` to `false`. This design mimics the behavior of PostgreSQL, which recognizes pre-computed password hashes when they are presented to the regular [`PASSWORD` option](https://www.postgresql.org/docs/14/sql-createrole.html). Password hashes are auto-detected based on their lexical structure. For example, any password that starts with the prefix `CRDB-BCRYPT$`, followed by a valid encoding of a bcrypt hash (as detailed below), is considered a candidate password hash. To ascertain whether a password hash will be recognized as such, orchestration code can use the new [built-in function](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#system-info-functions) `crdb_internal.check_password_hash_format()` (see the sketch after this list). [#73855][#73855]
-
-    CockroachDB only recognizes password hashes computed using bcrypt, as follows (we detail this algorithm so that orchestration software can implement its own password hash computation, separate from the database):
-    1. Take the cleartext password string.
-    1. Append the following byte array to the password: `e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855` (these are 32 hex-encoded bytes).
-    1. Choose a bcrypt cost. (CockroachDB servers use cost 10 by default.)
-    1. Generate a bcrypt hash of the string generated at step 2 with the cost chosen at step 3, as per [https://wikipedia.org/wiki/Bcrypt](https://wikipedia.org/wiki/Bcrypt) or [https://bcrypt.online/](https://bcrypt.online/). Note that CockroachDB only supports hashes computed using bcrypt version 2a.
-    1. Encode the hash into the format recognized by CockroachDB: the string `CRDB-BCRYPT`, followed by the standard bcrypt hash encoding (`$2a$...`).
-
-    Summary:
-
-    | Hash method     | Recognized by `check_password_hash_format()` | `ALTER`/`CREATE USER WITH PASSWORD`                                                                     |
-    |-----------------|----------------------------------------------|---------------------------------------------------------------------------------------------------------|
-    | `crdb-bcrypt`   | yes (`CRDB-BCRYPT$2a$...`)                   | recognized if enabled via [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings)   |
-    | `scram-sha-256` | yes (`SCRAM-SHA-256$4096:...`)               | not implemented yet (issue [#42519][#42519])                                                              |
-    | `md5`           | yes (`md5...`)                               | obsolete, will likely not be implemented                                                                  |
-
-- Backported the `server.user_login.store_client_pre_hashed_passwords.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) to v21.2. The backported default value in v21.2 is `false`. In v22.1 the default will be `true`. [#73855][#73855]
-
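-
-A minimal sketch of the client-side-hash workflow described above; the hash shown is a truncated placeholder, not a real bcrypt hash:
-
-~~~ sql
-SELECT crdb_internal.check_password_hash_format('CRDB-BCRYPT$2a$10$...');  -- confirm the hash will be auto-detected
-CREATE USER app_user WITH PASSWORD 'CRDB-BCRYPT$2a$10$...';                -- the cleartext password never reaches the server
-~~~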

-<h3 id="v21-2-4-operational-changes">Operational changes</h3>

-
-- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), `bulkio.ingest.flush_delay`, to act as a last-resort option for manually slowing bulk-writing processes if needed for cluster stability (see the sketch after this list). It should only be used if there is no better-suited back-pressure mechanism available for the contended resource. [#73758][#73758]
-- The [license expiry metric](https://www.cockroachlabs.com/docs/v21.2/licensing-faqs#monitor-for-license-expiry) is now available in the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview) and includes the expected `HELP` and `TYPE` annotations in [the Prometheus output on `_status/vars`](https://www.cockroachlabs.com/docs/v21.2/monitoring-and-alerting#prometheus-endpoint). [#73859][#73859]
-
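-
-A minimal sketch of using the new setting as an emergency brake; the delay value is an illustrative assumption:
-
-~~~ sql
-SET CLUSTER SETTING bulkio.ingest.flush_delay = '50ms';  -- slow bulk writes cluster-wide
-RESET CLUSTER SETTING bulkio.ingest.flush_delay;         -- remove the throttle once the cluster is stable
-~~~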

-<h3 id="v21-2-4-db-console-changes">DB Console changes</h3>

-
-- Added new formatting functions to create summarized queries for [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/selection-queries), [`INSERT`](https://www.cockroachlabs.com/docs/v21.2/insert), and [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update) statements. Also added new metadata fields, which will later be used to pass this information to the front-end [Statements page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page). [#73661][#73661]
-- The [jobs overview table in DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-jobs-page) now shows when jobs have the status "reverting", and shows the badge "retrying" when running or reverting jobs are also retrying. Hovering over the status for a "retrying" job shows the "Next execution time" in UTC. Two new columns, "Last Execution Time (UTC)" and "Execution Count", were also added to the jobs overview table, and the "Status" column was moved left to become the second column in the table. The `status` query parameter in the `/jobs` endpoint now supports the values `reverting` and `retrying`. [#73624][#73624]
-- Added new statement summaries to the [Statements page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page). This applies to [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/selection-queries), [`INSERT`](https://www.cockroachlabs.com/docs/v21.2/insert)/[`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert), and [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update) statements, and enables summaries that are more detailed and less ambiguous than the previous formats. [#73661][#73661]
-- Added new summarized formats for [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/selection-queries), [`INSERT`](https://www.cockroachlabs.com/docs/v21.2/insert)/[`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert), and [`UPDATE`](https://www.cockroachlabs.com/docs/v21.2/update) statements on the [Sessions page](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page) and the [Transactions page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page), to be consistent with the [Statements page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page). "Mean rows written" is now shown as a metric for all statement types on the [Statements page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), instead of being hidden for [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/selection-queries) statements. [#73661][#73661]
-- Made visual improvements to the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview). [#73386][#73386]
-- Updated the text of filter drop-downs in the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview), replacing "usage" with "statement" for consistency. [#74421][#74421]
-

-<h3 id="v21-2-4-bug-fixes">Bug fixes</h3>

-
-- Fixed a bug which caused corruption of [partial indexes](https://www.cockroachlabs.com/docs/v21.2/partial-indexes), which could cause incorrect query results. The bug was only present when two or more partial indexes in the same table had identical [`WHERE` clauses](https://www.cockroachlabs.com/docs/v21.2/select-clause#where-clause). This bug had been present since [v21.1.0]({% link releases/v21.1.md %}#v21-1-0). For more information, see [Technical Advisory 74385](https://www.cockroachlabs.com/docs/advisories/a74385). [#74471][#74471]
-- Fixed an internal error, `"empty Datums being compared to other"`, that could occur during planning for some [`SELECT`](https://www.cockroachlabs.com/docs/v21.2/selection-queries) queries over tables that included a `DEFAULT` partition value in a [`PARTITION BY LIST`](https://www.cockroachlabs.com/docs/v21.2/partition-by#define-a-list-partition-on-a-table-or-secondary-index) clause. This bug had been present since [v21.1.0]({% link releases/v21.1.md %}#v21-1-0) and does not exist in CockroachDB v20.2.x and earlier. [#73664][#73664]
-- Fixed a bug that could cause a CockroachDB node to deadlock upon startup in extremely rare cases. If encountered, a stack trace generated by `SIGQUIT` would show the function `makeStartLine()` near the top. This bug had existed since [v21.1.0]({% link releases/v21.1.md %}#v21-1-0). [#71407][#71407]
-- Fixed a bug where CockroachDB could crash when reading a [secondary index with a `STORING` clause](https://www.cockroachlabs.com/docs/v21.2/indexes#storing-columns) in the reverse direction (i.e., with `ORDER BY col DESC`). This bug was introduced in [v21.2.0]({% link releases/v21.2.md %}#v21-2-0). [#73699][#73699]
-- Fixed a bug where the correct index count was not displayed in the **Indexes** column on the [Databases page](https://www.cockroachlabs.com/docs/v21.2/ui-databases-page) of the DB Console. [#73747][#73747]
-- Fixed a bug where a failed [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) a non-empty table was unable to clean up the partially imported data when run in a [serverless cluster](https://www.cockroachlabs.com/docs/v21.2/deploy-lambda-function), because the operation to do so was incorrectly denied for tenants. [#73541][#73541]
-- Fixed a bug in database and schema [restore](https://www.cockroachlabs.com/docs/v21.2/restore) cleanup that resulted in a dangling descriptor entry on job failure. [#73411][#73411]
-- Fixed a bug which allowed queries to reference internal columns created by the system for [expression indexes](https://www.cockroachlabs.com/docs/v21.2/expression-indexes). These columns, whose names are prefixed with `crdb_internal_idx_expr`, can no longer be referenced in queries. This bug had been present since [v21.2.0]({% link releases/v21.2.md %}#v21-2-0), when expression indexes were released. [#74285][#74285]
-- Fixed a bug with ungraceful shutdown of distributed queries in some rare cases. "Ungraceful" here means due to a `statement_timeout` (most likely) or because a node crashed. [#73958][#73958]
-- Fixed a bug where CockroachDB could return a spurious "context canceled" error for a query that actually succeeded, in extremely rare cases. [#73958][#73958]
-- Fixed a bug where CockroachDB could encounter an internal error when executing queries with multiple window functions, one of which returned an [`INT2` or `INT4` type](https://www.cockroachlabs.com/docs/v21.2/int#names-and-aliases). [#74311][#74311]
-- Fixed a bug where it was possible for [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-zip) and the log file viewer in the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview) to observe incomplete log entries at the end of log files, especially the log file currently being written to by the CockroachDB process. This bug was introduced in a very early version of CockroachDB. [#74153][#74153]
-- Fixed a bug where [changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) would emit `NULL` values for [virtual computed columns](https://www.cockroachlabs.com/docs/v21.2/computed-columns#virtual-computed-columns). Previously, changefeeds would crash if these columns were set to `NOT NULL`. [#74095][#74095]
-- Internal columns created by the system to support [expression indexes](https://www.cockroachlabs.com/docs/v21.2/expression-indexes) are now omitted from the output of [`SHOW COLUMNS`](https://www.cockroachlabs.com/docs/v21.2/show-columns) statements and the [`information_schema.columns` table](https://www.cockroachlabs.com/docs/v21.2/information-schema#columns). [#73540][#73540]
-- Fixed a bug where [`IMPORT TABLE ... PGDUMP DATA`](https://www.cockroachlabs.com/docs/v21.2/migrate-from-postgres) with a [`COPY FROM`](https://www.cockroachlabs.com/docs/v21.2/copy-from) statement in the dump file that had fewer target columns than the inline table definition would result in a nil pointer exception. [#74435][#74435]
-- Fixed a bug where escape character processing was missing from constraint span generation, which resulted in incorrect results when doing escaped [`LIKE` lookups](https://www.cockroachlabs.com/docs/v21.2/scalar-expressions#string-pattern-matching). [#74259][#74259]
-- Fixed a bug affecting the [redactability of logging tags in output log entries](https://www.cockroachlabs.com/docs/v21.2/configure-logs#redact-logs). This bug was introduced in [the v21.2.0 release]({% link releases/v21.2.md %}#v21-2-0). [#74155][#74155]
-

-<h3 id="v21-2-4-performance-improvements">Performance improvements</h3>

-
-- Bulk ingestion of small write batches (e.g., [index](https://www.cockroachlabs.com/docs/v21.2/indexes) backfills into a large number of [ranges](https://www.cockroachlabs.com/docs/v21.2/architecture/distribution-layer#overview)) is now throttled, to avoid buildup of read amplification and the associated performance degradation. Concurrency is controlled by the new [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `kv.bulk_io_write.concurrent_addsstable_as_writes_requests`. [#74071][#74071]
-

-<h3 id="v21-2-4-miscellaneous">Miscellaneous</h3>

-
-- Added admit and commit latency metrics to [changefeeds](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview). [#73014][#73014]
-- Updated and improved the set of sink-specific [changefeed](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) metrics. [#73014][#73014]
-

-<h3 id="v21-2-4-contributors">Contributors</h3>

      - -This release includes 57 merged PRs by 31 authors. - -[#42519]: https://github.com/cockroachdb/cockroach/pull/42519 -[#71407]: https://github.com/cockroachdb/cockroach/pull/71407 -[#73014]: https://github.com/cockroachdb/cockroach/pull/73014 -[#73087]: https://github.com/cockroachdb/cockroach/pull/73087 -[#73386]: https://github.com/cockroachdb/cockroach/pull/73386 -[#73411]: https://github.com/cockroachdb/cockroach/pull/73411 -[#73540]: https://github.com/cockroachdb/cockroach/pull/73540 -[#73541]: https://github.com/cockroachdb/cockroach/pull/73541 -[#73581]: https://github.com/cockroachdb/cockroach/pull/73581 -[#73624]: https://github.com/cockroachdb/cockroach/pull/73624 -[#73661]: https://github.com/cockroachdb/cockroach/pull/73661 -[#73664]: https://github.com/cockroachdb/cockroach/pull/73664 -[#73669]: https://github.com/cockroachdb/cockroach/pull/73669 -[#73696]: https://github.com/cockroachdb/cockroach/pull/73696 -[#73699]: https://github.com/cockroachdb/cockroach/pull/73699 -[#73747]: https://github.com/cockroachdb/cockroach/pull/73747 -[#73758]: https://github.com/cockroachdb/cockroach/pull/73758 -[#73855]: https://github.com/cockroachdb/cockroach/pull/73855 -[#73859]: https://github.com/cockroachdb/cockroach/pull/73859 -[#73889]: https://github.com/cockroachdb/cockroach/pull/73889 -[#73958]: https://github.com/cockroachdb/cockroach/pull/73958 -[#74065]: https://github.com/cockroachdb/cockroach/pull/74065 -[#74071]: https://github.com/cockroachdb/cockroach/pull/74071 -[#74095]: https://github.com/cockroachdb/cockroach/pull/74095 -[#74125]: https://github.com/cockroachdb/cockroach/pull/74125 -[#74153]: https://github.com/cockroachdb/cockroach/pull/74153 -[#74155]: https://github.com/cockroachdb/cockroach/pull/74155 -[#74204]: https://github.com/cockroachdb/cockroach/pull/74204 -[#74259]: https://github.com/cockroachdb/cockroach/pull/74259 -[#74285]: https://github.com/cockroachdb/cockroach/pull/74285 -[#74311]: https://github.com/cockroachdb/cockroach/pull/74311 -[#74421]: https://github.com/cockroachdb/cockroach/pull/74421 -[#74435]: https://github.com/cockroachdb/cockroach/pull/74435 -[#74459]: https://github.com/cockroachdb/cockroach/pull/74459 -[#74471]: https://github.com/cockroachdb/cockroach/pull/74471 -[aa1e94d62]: https://github.com/cockroachdb/cockroach/commit/aa1e94d62 -[bf543d36b]: https://github.com/cockroachdb/cockroach/commit/bf543d36b -[cb1a3ffdd]: https://github.com/cockroachdb/cockroach/commit/cb1a3ffdd -[d3eb7c624]: https://github.com/cockroachdb/cockroach/commit/d3eb7c624 diff --git a/src/current/_includes/releases/v21.2/v21.2.5.md b/src/current/_includes/releases/v21.2/v21.2.5.md deleted file mode 100644 index 23cf9138890..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.5.md +++ /dev/null @@ -1,125 +0,0 @@ -## v21.2.5 - -Release date: February 7, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v21-2-5-enterprise-edition-changes">Enterprise edition changes</h3>

-
-- Redacted more potentially sensitive URI elements from [changefeed](https://www.cockroachlabs.com/docs/v21.2/change-data-capture-overview) job descriptions. This is a breaking change for workflows that involve copying URIs. As an alternative, the unredacted URI may be accessed from the [jobs](https://www.cockroachlabs.com/docs/v21.2/show-jobs) table directly. [#75187][#75187]
-

-<h3 id="v21-2-5-sql-language-changes">SQL language changes</h3>

-
-- Added the new [role option](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#roles) `VIEWACTIVITYREDACTED`, which works similarly to `VIEWACTIVITY` but restricts the usage of the [statement diagnostics bundle](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#diagnostics). It is possible for a user to have both options (`VIEWACTIVITY` and `VIEWACTIVITYREDACTED`), but `VIEWACTIVITYREDACTED` takes precedence on restrictions (see the sketch after this list). [#74862][#74862]
-- Added the new role option `NOSQLLOGIN` (and its inverse `SQLLOGIN`), which restricts [SQL CLI login](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql#prerequisites) ability for a user while retaining their ability to log in to the [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview) (as opposed to `NOLOGIN`, which restricts both SQL and DB Console). Without any role options, all login behavior remains permitted as it does today. OIDC logins to the DB Console continue to be permitted with `NOSQLLOGIN` set. [#75185][#75185]
-- The default [SQL stats](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer#table-statistics) flush interval is now 10 minutes. [#75524][#75524]
-
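-
-A minimal sketch of granting the new role options; the usernames are illustrative assumptions:
-
-~~~ sql
-ALTER USER analyst WITH VIEWACTIVITYREDACTED;  -- may view activity, but not statement diagnostics bundles
-ALTER USER dashboard_svc WITH NOSQLLOGIN;      -- may log in to the DB Console, but not open SQL sessions
-~~~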

-<h3 id="v21-2-5-operational-changes">Operational changes</h3>

-
-- The meaning of the `sql.distsql.max_running_flows` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) has been extended: when the value is negative, it is multiplied by the number of CPUs on the node to obtain the maximum number of concurrent remote flows on the node. The default value is now -128, meaning that a 4-CPU machine allows up to 512 concurrent remote [DistSQL](https://www.cockroachlabs.com/docs/v21.2/explain-analyze#distsql-option) flows, while an 8-CPU machine allows up to 1024. The previous default was 500 (see the sketch below). [#75509][#75509]
-
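-
-A minimal sketch of the two value semantics; the values are illustrative:
-
-~~~ sql
-SET CLUSTER SETTING sql.distsql.max_running_flows = -128;  -- negative: scaled by CPU count (128 flows per CPU)
-SET CLUSTER SETTING sql.distsql.max_running_flows = 500;   -- positive: absolute cap, the pre-change behavior
-~~~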

-<h3 id="v21-2-5-command-line-changes">Command-line changes</h3>

-
-- If the right certificates are not found in the [certs dir](https://www.cockroachlabs.com/docs/v21.2/cockroach-cert#certificate-directory), or no [certs dir or certificate path is specified](https://www.cockroachlabs.com/docs/v21.2/cockroach-start#security), the client now falls back to Go's TLS code to check the server CA against the certificates in the OS trust store. If no matching certificate is found, an x509 error announces that the certificate is signed by an unknown authority. [#74544][#74544]
-- Fixed the CLI help text for [`ALTER DATABASE`](https://www.cockroachlabs.com/docs/v21.2/alter-database) to show the correct options for [`ADD REGION`](https://www.cockroachlabs.com/docs/v21.2/add-region) and [`DROP REGION`](https://www.cockroachlabs.com/docs/v21.2/drop-region), and to include some missing options such as [`CONFIGURE ZONE`](https://www.cockroachlabs.com/docs/v21.2/configure-zone). [#75067][#75067]
-- [`debug zip`](https://www.cockroachlabs.com/docs/v21.2/cockroach-debug-zip) output now includes the cluster-wide KV replication reports. [#75794][#75794]
-- `debug zip` output now includes the raw `system.settings` table. This table makes it possible to determine whether a cluster setting has been explicitly set. [#75794][#75794]
-

-<h3 id="v21-2-5-db-console-changes">DB Console changes</h3>

-
-- The **clear SQL stats** links on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Transactions](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) pages were relabeled **reset SQL stats**, for consistency with the language in the [SQL shell](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql). [#74417][#74417]
-- The [Terminate Session and Terminate Statement buttons](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page#session-details) can now be enabled in the DB Console. [#74530][#74530]
-- On the [Statement Details](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page) page, the raw statement in the page URL was replaced with the statement ID. [#75463][#75463]
-- Removed `$ internal` as one of the [app filter options](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#filter) under the Statements and Transactions page filters. [#75485][#75485]
-- Changed the order of tabs on the [SQL Activity](https://www.cockroachlabs.com/docs/v21.2/ui-overview#sql-activity) page to Statements, Transactions, and Sessions. [#75503][#75503]
-- If the user has the role option `VIEWACTIVITYREDACTED`, the [statement diagnostics bundle](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#diagnostics) info on the [Statements page (Diagnostics column)](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statements-table), [Statement Details page (diagnostics tab)](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#diagnostics), and [Advanced Debug page (diagnostics history)](https://www.cockroachlabs.com/docs/v21.2/ui-debug-pages#reports) is hidden. [#75521][#75521]
-- On the Statements and Transactions pages, loading and error pages no longer obscure the page filter controls. [#75527][#75527]
-

-<h3 id="v21-2-5-bug-fixes">Bug fixes</h3>

-
-- Fixed a panic when attempting to access the hottest ranges (e.g., via the `/_status/hotranges` [endpoint](https://www.cockroachlabs.com/docs/v21.2/cluster-api)) before initial statistics had been gathered. [#74515][#74515]
-- A doubly nested [enum](https://www.cockroachlabs.com/docs/v21.2/enum) in a [DistSQL](https://www.cockroachlabs.com/docs/v21.2/explain-analyze#distsql-option) query no longer causes a node-crashing panic. [#74490][#74490]
-- Servers no longer crash due to panics in HTTP handlers. [#74539][#74539]
-- Fixed a bug where, when [foreign keys](https://www.cockroachlabs.com/docs/v21.2/foreign-key) were included inside an [`ADD COLUMN`](https://www.cockroachlabs.com/docs/v21.2/add-column) statement and multiple columns were added in a single statement, the foreign key could be applied to the wrong (first added) column, or an error could be generated based on the wrong column. [#74528][#74528]
-- When `sslmode=require` is set in a [connection string](https://www.cockroachlabs.com/docs/v21.2/connection-parameters#additional-connection-parameters), certificate path checking is now bypassed. [#74544][#74544]
-- Uninitialized [replicas](https://www.cockroachlabs.com/docs/v21.2/architecture/overview#architecture-replica) that are abandoned after an unsuccessful snapshot no longer perform periodic background work, so they no longer have a non-negligible cost. [#74185][#74185]
-- Fixed a bug where a [backed-up](https://www.cockroachlabs.com/docs/v21.2/take-full-and-incremental-backups) `defaultdb` that was configured as multi-region was not restored as a [multi-region](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview) database on cluster restore. [#74607][#74607]
-- Fixed a bug where deleting data via [schema changes](https://www.cockroachlabs.com/docs/v21.2/online-schema-changes) (e.g., when dropping an index or table) could fail with a "command too large" error. [#74798][#74798]
-- CockroachDB no longer returns incorrect results or internal errors for queries with [window functions](https://www.cockroachlabs.com/docs/v21.2/window-functions) returning `INT`, `FLOAT`, `BYTES`, `STRING`, `UUID`, or `JSON` [types](https://www.cockroachlabs.com/docs/v21.2/data-types) when disk spilling occurred. [#74589][#74589]
-- CockroachDB no longer incorrectly calculates `MIN`/`MAX` when used as [window functions](https://www.cockroachlabs.com/docs/v21.2/window-functions) in some cases after spilling to disk. [#74589][#74589]
-- Fixed panics possible in some distributed queries using [enums](https://www.cockroachlabs.com/docs/v21.2/enum) in [join](https://www.cockroachlabs.com/docs/v21.2/joins) predicates. [#74733][#74733]
-- CockroachDB no longer encounters an internal error when performing [`UPSERT`](https://www.cockroachlabs.com/docs/v21.2/upsert) or [`INSERT ... ON CONFLICT`](https://www.cockroachlabs.com/docs/v21.2/insert#on-conflict-clause) queries in some cases when the new rows contain `NULL` values (either explicitly specified or used because some columns were omitted). [#74872][#74872]
-- Internal errors no longer occur when altering the [primary key](https://www.cockroachlabs.com/docs/v21.2/primary-key) of a table. The bug was only present if the table had a [partial index](https://www.cockroachlabs.com/docs/v21.2/partial-indexes) with a predicate that referenced a [virtual computed column](https://www.cockroachlabs.com/docs/v21.2/computed-columns). [#75183][#75183]
-- Fixed a bug that caused errors in rare cases when trying to divide [`INTERVAL`](https://www.cockroachlabs.com/docs/v21.2/interval) values by `INT4` or `INT2` [values](https://www.cockroachlabs.com/docs/v21.2/int). [#75079][#75079]
-- Fixed a bug that could occur when a [`TIMETZ`](https://www.cockroachlabs.com/docs/v21.2/time) column was indexed and a query predicate constrained that column using a `<` or `>` operator with a `TIMETZ` constant. If the column contained values with time zones that did not match the time zone of the `TIMETZ` constant, it was possible that not all matching values were returned by the query. Specifically, the results may not have included values within one microsecond of the predicate's absolute time. This bug existed on all versions of v20.1, v20.2, v21.1, and v21.2 prior to this release. [#75172][#75172]
-- Fixed an internal error, "estimated row count must be non-zero", that could occur during planning for queries over a table with a `TIMETZ` column. This error was due to a faulty assumption in the [statistics](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer#table-statistics) estimation code about the ordering of `TIMETZ` values, which has now been fixed. The error could occur when `TIMETZ` values used in the query had a different time zone offset than the `TIMETZ` values stored in the table. [#75172][#75172]
-- [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) now inserts a `system.namespace` entry for synthetic [public schemas](https://www.cockroachlabs.com/docs/v21.2/show-schemas). [#74759][#74759]
-- Fixed a bug that caused internal errors in queries with set operations, like [`UNION`](https://www.cockroachlabs.com/docs/v21.2/selection-queries#union-combine-two-queries), when corresponding columns on either side of the set operation were not the same type. This error only occurred with a limited set of types. This bug is present in versions v20.2.6+, v21.1.0+, and v21.2.0+. [#75276][#75276]
-- Fixed [SQL Activity](https://www.cockroachlabs.com/docs/v21.2/ui-overview#sql-activity) pages crashing when a column was sorted for the third time. [#75486][#75486]
-- Updated the `String()` function of `roleOption` to add a space in the role option [`VALID UNTIL`](https://www.cockroachlabs.com/docs/v21.2/create-role#role-options). [#75494][#75494]
-- Fixed a bug where, in particular cases, queries involving [a scan that returns many results](https://www.cockroachlabs.com/docs/v21.2/performance-best-practices-overview#table-scan-best-practices) combined with lookups of individual keys did not return all results from the table. [#75512][#75512]
-- When adding a [hash-sharded index](https://www.cockroachlabs.com/docs/v21.2/hash-sharded-indexes) to an existing table, traffic could overwhelm a single range of the index before it was split into more ranges for shards as range size grew. The schema changer now pre-splits ranges on shard boundaries before the index becomes writable. The `sql.hash_sharded_range_pre_split.max` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) is the upper bound on the number of ranges to pre-split. If the bucket count of the defined index is less than the cluster setting, the bucket count is the number of pre-split ranges (see the sketch after this list). [#75474][#75474]
-- Fixed a bug where, if multiple columns were added to a table inside a transaction, none of the columns were backfilled if the last column did not require a backfill. [#75507][#75507]
-- `crdb_internal.deserialize_session` now checks that the `session_user` has the privilege to [`SET ROLE`](https://www.cockroachlabs.com/docs/v21.2/set-vars#special-syntax-cases) to the `current_user` before changing the session settings. [#75600][#75600]
-- CockroachDB no longer incorrectly reports the `KV bytes read` statistic in [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze) output. The bug was present only in v21.2.x versions. [#75260][#75260]
-- The `options` query parameter is no longer removed when using the `\c` command in the [SQL shell](https://www.cockroachlabs.com/docs/v21.2/cockroach-sql) to reconnect to the cluster. [#75765][#75765]
-- The `CancelSession` [endpoint](https://www.cockroachlabs.com/docs/v21.2/cluster-api) now correctly propagates gateway metadata when forwarding requests. [#75832][#75832]
-- Fixed a bug when granting an incompatible database privilege as a default privilege for a database with a non-lowercase name. [#75580][#75580]
-
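-
-A minimal sketch of the pre-split behavior described above, assuming a hypothetical `events` table; the session variable shown was required to create hash-sharded indexes in v21.2:
-
-~~~ sql
-SET experimental_enable_hash_sharded_indexes = on;
-SET CLUSTER SETTING sql.hash_sharded_range_pre_split.max = 16;  -- upper bound on pre-split ranges
-CREATE INDEX events_ts_idx ON events (ts) USING HASH WITH BUCKET_COUNT = 8;  -- 8 < 16, so 8 ranges are pre-split
-~~~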

-<h3 id="v21-2-5-performance-improvements">Performance improvements</h3>

-
-- [Rangefeed](https://www.cockroachlabs.com/docs/v21.2/use-changefeeds#enable-rangefeeds) streams now use separate HTTP connections when the `kv.rangefeed.use_dedicated_connection_class.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) is turned on (see the sketch after this list). Using a separate connection class reduces the possibility of out-of-memory errors when running rangefeeds against very large tables. The connection window size for rangefeeds can be adjusted via the `COCKROACH_RANGEFEED_INITIAL_WINDOW_SIZE` [environment variable](https://www.cockroachlabs.com/docs/v21.2/cockroach-commands#environment-variables), whose default is 128KB. [#74456][#74456]
-- Incremental [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup)s now use less memory to verify coverage of prior backups. [#74588][#74588]
-- The merging of incremental backup layers during [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) now uses a simpler and less memory-intensive algorithm. [#74593][#74593]
-
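-
-A minimal sketch of enabling the dedicated rangefeed connection class described above:
-
-~~~ sql
-SET CLUSTER SETTING kv.rangefeed.use_dedicated_connection_class.enabled = true;
-~~~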

-<h3 id="v21-2-5-contributors">Contributors</h3>

      - -This release includes 75 merged PRs by 35 authors. - -[#74185]: https://github.com/cockroachdb/cockroach/pull/74185 -[#74417]: https://github.com/cockroachdb/cockroach/pull/74417 -[#74456]: https://github.com/cockroachdb/cockroach/pull/74456 -[#74490]: https://github.com/cockroachdb/cockroach/pull/74490 -[#74515]: https://github.com/cockroachdb/cockroach/pull/74515 -[#74528]: https://github.com/cockroachdb/cockroach/pull/74528 -[#74530]: https://github.com/cockroachdb/cockroach/pull/74530 -[#74539]: https://github.com/cockroachdb/cockroach/pull/74539 -[#74544]: https://github.com/cockroachdb/cockroach/pull/74544 -[#74588]: https://github.com/cockroachdb/cockroach/pull/74588 -[#74589]: https://github.com/cockroachdb/cockroach/pull/74589 -[#74593]: https://github.com/cockroachdb/cockroach/pull/74593 -[#74607]: https://github.com/cockroachdb/cockroach/pull/74607 -[#74677]: https://github.com/cockroachdb/cockroach/pull/74677 -[#74733]: https://github.com/cockroachdb/cockroach/pull/74733 -[#74759]: https://github.com/cockroachdb/cockroach/pull/74759 -[#74798]: https://github.com/cockroachdb/cockroach/pull/74798 -[#74862]: https://github.com/cockroachdb/cockroach/pull/74862 -[#74872]: https://github.com/cockroachdb/cockroach/pull/74872 -[#75067]: https://github.com/cockroachdb/cockroach/pull/75067 -[#75079]: https://github.com/cockroachdb/cockroach/pull/75079 -[#75164]: https://github.com/cockroachdb/cockroach/pull/75164 -[#75172]: https://github.com/cockroachdb/cockroach/pull/75172 -[#75183]: https://github.com/cockroachdb/cockroach/pull/75183 -[#75185]: https://github.com/cockroachdb/cockroach/pull/75185 -[#75187]: https://github.com/cockroachdb/cockroach/pull/75187 -[#75276]: https://github.com/cockroachdb/cockroach/pull/75276 -[#75463]: https://github.com/cockroachdb/cockroach/pull/75463 -[#75485]: https://github.com/cockroachdb/cockroach/pull/75485 -[#75486]: https://github.com/cockroachdb/cockroach/pull/75486 -[#75494]: https://github.com/cockroachdb/cockroach/pull/75494 -[#75503]: https://github.com/cockroachdb/cockroach/pull/75503 -[#75509]: https://github.com/cockroachdb/cockroach/pull/75509 -[#75512]: https://github.com/cockroachdb/cockroach/pull/75512 -[#75521]: https://github.com/cockroachdb/cockroach/pull/75521 -[#75524]: https://github.com/cockroachdb/cockroach/pull/75524 -[#75527]: https://github.com/cockroachdb/cockroach/pull/75527 -[#75531]: https://github.com/cockroachdb/cockroach/pull/75531 -[#75580]: https://github.com/cockroachdb/cockroach/pull/75580 -[#75600]: https://github.com/cockroachdb/cockroach/pull/75600 -[#75765]: https://github.com/cockroachdb/cockroach/pull/75765 -[#75794]: https://github.com/cockroachdb/cockroach/pull/75794 -[#75832]: https://github.com/cockroachdb/cockroach/pull/75832 -[#75474]: https://github.com/cockroachdb/cockroach/pull/75474 -[#75507]: https://github.com/cockroachdb/cockroach/pull/75507 -[#75260]: https://github.com/cockroachdb/cockroach/pull/75260 diff --git a/src/current/_includes/releases/v21.2/v21.2.6.md b/src/current/_includes/releases/v21.2/v21.2.6.md deleted file mode 100644 index 1291a64c585..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.6.md +++ /dev/null @@ -1,43 +0,0 @@ -## v21.2.6 - -Release date: February 22, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v21-2-6-enterprise-edition-changes">Enterprise edition changes</h3>

-
-- The [Kafka sink](https://www.cockroachlabs.com/docs/v21.2/changefeed-sinks#kafka) now supports larger messages, up to 2GB in size. [#76321][#76321]
-

-<h3 id="v21-2-6-sql-language-changes">SQL language changes</h3>

-
-- Added new [built-in functions](https://www.cockroachlabs.com/docs/v21.2/functions-and-operators#built-in-functions) `crdb_internal.revalidate_unique_constraint`, `crdb_internal.revalidate_unique_constraints_in_table`, and `crdb_internal.revalidate_unique_constraints_in_all_tables`, which can be used to revalidate existing [unique constraints](https://www.cockroachlabs.com/docs/v21.2/unique). The variations support validation of a single constraint, validation of all unique constraints in a table, and validation of all unique constraints in all tables in the current database, respectively (see the sketch after this list). If any constraint fails validation, the functions return an error with a hint about which data caused the constraint violation. These violations can then be resolved manually by updating or deleting the rows in violation. This is useful to users who think they may have been affected by [#73024][#73024]. [#75858][#75858]
-- S3 URIs used in [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup), [`EXPORT`](https://www.cockroachlabs.com/docs/v21.2/export), or [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/v21.2/create-changefeed) statements can now include the query parameter `S3_STORAGE_CLASS`, which configures the storage class used when the job creates objects in the designated S3 bucket. [#75608][#75608]
-- Non-admin users can now use the [`SHOW RANGES`](https://www.cockroachlabs.com/docs/v21.2/show-ranges) statement if the [`ZONECONFIG`](https://www.cockroachlabs.com/docs/v21.2/grant#supported-privileges) privilege is granted. [#76071][#76071]
-- `ST_MakePolygon` is now disallowed from making empty polygons from empty linestrings, which is not allowed in PostGIS. [#76255][#76255]
-
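-
-A minimal sketch of the revalidation built-ins, assuming the current database is the one to check:
-
-~~~ sql
-SELECT crdb_internal.revalidate_unique_constraints_in_all_tables();  -- errors with a hint if any unique constraint is violated
-~~~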

-<h3 id="v21-2-6-bug-fixes">Bug fixes</h3>

-
-- Fixed a bug where ownership information for sequence descriptors and column descriptors was incorrect (see the sketch after this list). A sequence is created when a column is defined with the [`SERIAL`](https://www.cockroachlabs.com/docs/v21.2/serial) type and the `serial_normalization` session variable is set to `sql_sequence`. In this case, the sequence is owned by the column and the table where the column exists, and should be dropped when the owning table/column is dropped, matching the PostgreSQL behavior. The bug caused CockroachDB to never set the ownership information correctly, only the dependency relationship, which caused the sequence to remain even though the owning table/column no longer existed. [#75704][#75704]
-- Fixed a bug that could cause nodes to crash when truncating abnormally large Raft logs. [#75979][#75979]
-- The [DB Console](https://www.cockroachlabs.com/docs/v21.2/ui-overview) [**Databases**](https://www.cockroachlabs.com/docs/v21.2/ui-databases-page) page now shows stable, consistent values for database sizes. [#76324][#76324]
-
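-
-A minimal sketch of the ownership behavior described in the first bullet; the table name is illustrative:
-
-~~~ sql
-SET serial_normalization = sql_sequence;
-CREATE TABLE orders (id SERIAL PRIMARY KEY);  -- creates a sequence owned by orders.id
-DROP TABLE orders;                            -- with the fix, the owned sequence is dropped too
-~~~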

-<h3 id="v21-2-6-performance-improvements">Performance improvements</h3>

-
-- Sorting data of bytes-like types ([`BYTES`](https://www.cockroachlabs.com/docs/v21.2/bytes), [`STRING`](https://www.cockroachlabs.com/docs/v21.2/string), [`JSON`/`JSONB`](https://www.cockroachlabs.com/docs/v21.2/jsonb), [`UUID`](https://www.cockroachlabs.com/docs/v21.2/uuid)) when the [`LIMIT`](https://www.cockroachlabs.com/docs/v21.2/limit-offset) clause is specified now has more predictable performance. [#75847][#75847]
-

-<h3 id="v21-2-6-contributors">Contributors</h3>

      - -This release includes 25 merged PRs by 21 authors. - -[#75608]: https://github.com/cockroachdb/cockroach/pull/75608 -[#73024]: https://github.com/cockroachdb/cockroach/issues/73024 -[#75704]: https://github.com/cockroachdb/cockroach/pull/75704 -[#75847]: https://github.com/cockroachdb/cockroach/pull/75847 -[#75858]: https://github.com/cockroachdb/cockroach/pull/75858 -[#75893]: https://github.com/cockroachdb/cockroach/pull/75893 -[#75979]: https://github.com/cockroachdb/cockroach/pull/75979 -[#76071]: https://github.com/cockroachdb/cockroach/pull/76071 -[#76250]: https://github.com/cockroachdb/cockroach/pull/76250 -[#76255]: https://github.com/cockroachdb/cockroach/pull/76255 -[#76321]: https://github.com/cockroachdb/cockroach/pull/76321 -[#76324]: https://github.com/cockroachdb/cockroach/pull/76324 diff --git a/src/current/_includes/releases/v21.2/v21.2.7.md b/src/current/_includes/releases/v21.2/v21.2.7.md deleted file mode 100644 index 79bc3c86b7c..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.7.md +++ /dev/null @@ -1,86 +0,0 @@ -## v21.2.7 - -Release Date: March 14, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v21-2-7-security-updates">Security updates</h3>

      - -- When the `sql.telemetry.query_sampling.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) is enabled, SQL names and client IP addresses are no longer redacted in telemetry logs. [#77072][#77072] - -
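-
-To opt in (a sketch; the setting name is taken from the note above):
-
-~~~ sql
-SET CLUSTER SETTING sql.telemetry.query_sampling.enabled = true;
-~~~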

-<h3 id="v21-2-7-enterprise-edition-changes">Enterprise edition changes</h3>

      - -- Currently executing schedules are cancelled immediately when the jobs scheduler is disabled. [#77313][#77313] - -

-<h3 id="v21-2-7-sql-language-changes">SQL language changes</h3>

-
-- When dropping a user that has default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges), the error message now includes the database and schema in which the default privileges are defined, along with a hint showing exactly how to remove them (sketched below). [#77142][#77142]
-- Fixed a bug where `crdb_internal.default_privileges` would incorrectly show default [privileges](https://www.cockroachlabs.com/docs/v21.2/security-reference/authorization#privileges) for databases where the default privilege was not actually defined. [#77304][#77304]
-
-</div>
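-
-For illustration, a hedged sketch of acting on such a hint before dropping a role (`olduser` is hypothetical, and the exact statement shown in the hint may differ):
-
-~~~ sql
-ALTER DEFAULT PRIVILEGES FOR ROLE olduser REVOKE ALL ON TABLES FROM olduser;
-DROP ROLE olduser;
-~~~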

-<h3 id="v21-2-7-operational-changes">Operational changes</h3>

-
-- Operators who access a cluster's HTTP endpoints through a proxy can now target a specific node by setting a `remote_node_id` query parameter or cookie to the ID of the node they would like the connection proxied to. [#76694][#76694]
-- Added the [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `bulkio.backup.resolve_destination_in_job.enabled`, which can be used to delay resolution of a backup's destination until the job starts running (see the sketch after this list). [#76816][#76816]
-
-</div>
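-
-A sketch of enabling the new setting:
-
-~~~ sql
-SET CLUSTER SETTING bulkio.backup.resolve_destination_in_job.enabled = true;
-~~~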

-<h3 id="v21-2-7-db-console-changes">DB Console changes</h3>

-
-- Open SQL Transactions and Active SQL Transactions are now downsampled using `MAX` instead of `AVG`, so narrow spikes in transaction counts are reflected more accurately when viewing downsampled data. [#76688][#76688]
-- DB Console requests can be routed to arbitrary nodes in the cluster. Users can select a node from a dropdown on the **Advanced Debug** page of the DB Console UI to route their UI to that node. Manually initiated requests can either add a `remote_node_id` query parameter to the request or set a `remote_node_id` HTTP cookie to manage the routing of the request. [#76694][#76694]
-- Added long-loading messages to the **SQL Activity** pages. [#77008][#77008]
-- Removed a stray parenthesis at the end of the duration time for a succeeded job. It had been accidentally introduced to unreleased master and a 21.2 backport. [#77444][#77444]
-
-</div>

-<h3 id="v21-2-7-bug-fixes">Bug fixes</h3>

-
-- Fixed a bug which caused the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) to omit join filters in rare cases when reordering joins, which could result in incorrect query results. This bug was present since v20.2. [#76619][#76619]
-- Fixed a bug where certain `crdb_internal` tables could return incorrect information due to cached table descriptor information. [#76520][#76520]
-- Fixed a bug where CockroachDB could incorrectly fail to return a row from a table with multiple column families when that row contains a `NULL` value and a composite type ([`FLOAT`](https://www.cockroachlabs.com/docs/v21.2/float), [`DECIMAL`](https://www.cockroachlabs.com/docs/v21.2/decimal), [`COLLATED STRING`](https://www.cockroachlabs.com/docs/v21.2/collate), or an array of these types) is included in the `PRIMARY KEY`. [#76636][#76636]
-- Fixed a bug where a [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore) job could hang if it encountered an error when ingesting restored data. [#76509][#76509]
-- Fixed a race condition that in rare circumstances could cause a node to panic with `unexpected Stopped processor` during shutdown. [#76827][#76827]
-- There is now a 1-hour timeout when sending [Raft](https://www.cockroachlabs.com/docs/v21.2/architecture/replication-layer#raft) snapshots, to prevent stalled snapshot transfers from blocking Raft log truncation and letting the Raft log grow very large. This is configurable via the `COCKROACH_RAFT_SEND_SNAPSHOT_TIMEOUT` environment variable (see the sketch after this list). [#76829][#76829]
-- Fixed an error that could sometimes occur when sorting the output of the [`SHOW CREATE ALL TABLES`](https://www.cockroachlabs.com/docs/v21.2/show-create) statement. [#76698][#76698]
-- Fixed a bug where [`CASE` expressions](https://www.cockroachlabs.com/docs/v21.2/scalar-expressions#conditional-expressions) with branches that result in types that cannot be cast to a common type caused internal errors. They now result in a user-facing error. [#76616][#76616]
-- Error messages produced during import are now truncated. Previously, [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import) could generate error messages so large that they could not be persisted to the jobs table, leaving a failed import retrying repeatedly instead of entering the failed state. [#76980][#76980]
-- Fixed a bug that could corrupt indexes containing [virtual columns](https://www.cockroachlabs.com/docs/v21.2/computed-columns) or [expressions](https://www.cockroachlabs.com/docs/v21.2/expression-indexes). The bug occurred only when the index's table had a foreign key reference to another table with an `ON DELETE CASCADE` action, and a row was deleted in the referenced table. This bug was present since virtual columns were added in v21.1.0. [#77053][#77053]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.2/changefeed-sinks) now retry instead of failing on RPC send failure. [#77069][#77069]
-- Fixed a bug that caused the **Open Transactions** chart on the [**Metrics**](https://www.cockroachlabs.com/docs/v21.2/ui-overview-dashboard) page to constantly increase for empty transactions. [#77236][#77236]
-- Fixed a bug that could interfere with a system table migration. [#77309][#77309]
-- The content type header for the HTTP log sink is now set to `application/json` if the format of the log output is `JSON`. [#77341][#77341]
-- Fixed a bug where draining nodes in a cluster without shutting them down could stall foreground traffic in the cluster. [#77490][#77490]
-
-</div>
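-
-A sketch of overriding the new Raft snapshot timeout at node start (the flags shown are illustrative, and the duration format is assumed to be a Go-style duration):
-
-~~~ shell
-$ COCKROACH_RAFT_SEND_SNAPSHOT_TIMEOUT=30m cockroach start --certs-dir=certs --join=<join-addresses>
-~~~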

-<h3 id="v21-2-7-performance-improvements">Performance improvements</h3>

      - -- Fixed a bug in the histogram estimation code that could cause the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) to think a scan of a multi-column index would produce 0 rows, when in fact it would produce many rows. This could cause the optimizer to choose a suboptimal plan. It is now less likely for the optimizer to choose a suboptimal plan when multiple multi-column indexes are available. [#76555][#76555] -- The accuracy of histogram calculations for `BYTES` types has been improved. As a result, the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) should generate more efficient query plans in some cases. [#76796][#76796] - -

-<h3 id="v21-2-7-contributors">Contributors</h3>

      - -This release includes 34 merged PRs by 24 authors. - -[#76509]: https://github.com/cockroachdb/cockroach/pull/76509 -[#76520]: https://github.com/cockroachdb/cockroach/pull/76520 -[#76555]: https://github.com/cockroachdb/cockroach/pull/76555 -[#76616]: https://github.com/cockroachdb/cockroach/pull/76616 -[#76619]: https://github.com/cockroachdb/cockroach/pull/76619 -[#76636]: https://github.com/cockroachdb/cockroach/pull/76636 -[#76688]: https://github.com/cockroachdb/cockroach/pull/76688 -[#76694]: https://github.com/cockroachdb/cockroach/pull/76694 -[#76698]: https://github.com/cockroachdb/cockroach/pull/76698 -[#76796]: https://github.com/cockroachdb/cockroach/pull/76796 -[#76816]: https://github.com/cockroachdb/cockroach/pull/76816 -[#76827]: https://github.com/cockroachdb/cockroach/pull/76827 -[#76829]: https://github.com/cockroachdb/cockroach/pull/76829 -[#76980]: https://github.com/cockroachdb/cockroach/pull/76980 -[#76987]: https://github.com/cockroachdb/cockroach/pull/76987 -[#77008]: https://github.com/cockroachdb/cockroach/pull/77008 -[#77053]: https://github.com/cockroachdb/cockroach/pull/77053 -[#77069]: https://github.com/cockroachdb/cockroach/pull/77069 -[#77072]: https://github.com/cockroachdb/cockroach/pull/77072 -[#77142]: https://github.com/cockroachdb/cockroach/pull/77142 -[#77236]: https://github.com/cockroachdb/cockroach/pull/77236 -[#77304]: https://github.com/cockroachdb/cockroach/pull/77304 -[#77309]: https://github.com/cockroachdb/cockroach/pull/77309 -[#77313]: https://github.com/cockroachdb/cockroach/pull/77313 -[#77341]: https://github.com/cockroachdb/cockroach/pull/77341 -[#77444]: https://github.com/cockroachdb/cockroach/pull/77444 -[#77490]: https://github.com/cockroachdb/cockroach/pull/77490 -[efdf6a61e]: https://github.com/cockroachdb/cockroach/commit/efdf6a61e diff --git a/src/current/_includes/releases/v21.2/v21.2.8.md b/src/current/_includes/releases/v21.2/v21.2.8.md deleted file mode 100644 index cb8a48b7080..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.8.md +++ /dev/null @@ -1,101 +0,0 @@ -## v21.2.8 - -Release Date: April 4, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v21-2-8-security-updates">Security updates</h3>

-
-- Users can now enable HSTS headers on all HTTP responses, which force browsers to upgrade to HTTPS without a redirect. This is controlled by the `server.hsts.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), which defaults to `false` (see the sketch after this list). [#77845][#77845]
-- Added a new flag, `--external-io-enable-non-admin-implicit-access`, that can remove the admin-only restriction on interacting with arbitrary network endpoints and using implicit authentication in operations such as [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import), or [`EXPORT`](https://www.cockroachlabs.com/docs/v21.2/export). [#78599][#78599]
-
-</div>
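-
-A sketch of enabling HSTS headers:
-
-~~~ sql
-SET CLUSTER SETTING server.hsts.enabled = true;
-~~~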

-<h3 id="v21-2-8-enterprise-edition-changes">Enterprise edition changes</h3>

-
-- Changefeeds running on tables with a low [`gc.ttlseconds`](https://www.cockroachlabs.com/docs/v21.2/configure-replication-zones#gc-ttlseconds) value now function more reliably, because protected timestamps are maintained for the changefeed targets at the changefeed's resolved timestamp. The frequency at which the protected timestamp is advanced to the resolved timestamp can be configured through the `changefeed.protect_timestamp_interval` cluster setting. If the changefeed lags so far behind that storage of old data becomes an issue, cancelling the changefeed releases the protected timestamps and allows garbage collection to resume. If `protect_data_from_gc_on_pause` is unset, pausing the changefeed releases the existing protected timestamp record. A sketch follows this list. [#77589][#77589]
-- Added the `changefeed.backfill_pending_ranges` Prometheus metric to track the ongoing backfill progress of a changefeed. [#77383][#77383]
-- Changefeeds now record message size histograms. [#77932][#77932]
-- The number of concurrent catchup scan requests issued by a rangefeed client is now limited. [#77932][#77932]
-- Removed the expensive and unnecessary `schedules.round.schedules-ready-to-run` and `schedules.round.num-jobs-running` metrics from job schedulers. [#78583][#78583]
-
-</div>
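-
-A sketch combining the two knobs above (the sink URI, table name, and interval are hypothetical):
-
-~~~ sql
-SET CLUSTER SETTING changefeed.protect_timestamp_interval = '10m';
-CREATE CHANGEFEED FOR TABLE accounts INTO 'kafka://broker:9092' WITH protect_data_from_gc_on_pause;
-~~~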

-<h3 id="v21-2-8-sql-language-changes">SQL language changes</h3>

-
-- Added a `sql.auth.resolve_membership_single_scan.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings), which changes the query used by the internal role membership cache. Previously, the code recursively looked up each role in the membership hierarchy, issuing multiple queries. With the setting on, a single query is used. This setting is `false` by default. [#77631][#77631]
-- When users run [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v21.2/show-backup) on an encrypted incremental backup, they must now set the `encryption_info_dir` option to the full backup directory in order for `SHOW BACKUP` to work (see the sketch after this list). [#78141][#78141]
-- The [stats compaction](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer#table-statistics) scheduled job no longer causes intent buildup. [#78583][#78583]
-- Implemented a scheduled logger that captures index usage statistics to the telemetry logging channel. [#78522][#78522]
-
-</div>
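-
-A sketch of the new `SHOW BACKUP` requirement (the URIs and passphrase are hypothetical):
-
-~~~ sql
-SHOW BACKUP 'nodelocal://1/backups/incremental'
-    WITH encryption_passphrase = 'secret', encryption_info_dir = 'nodelocal://1/backups/full';
-~~~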

-<h3 id="v21-2-8-operational-changes">Operational changes</h3>

      - -- The setting `kv.bulk_io_write.min_capacity_remaining_fraction` can now be set to cause bulk ingest operations like [`IMPORT`](https://www.cockroachlabs.com/docs/v21.2/import), [`RESTORE`](https://www.cockroachlabs.com/docs/v21.2/restore), or [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v21.2/create-index) to fail rather than write to a node that is running out of disk space. [#78575][#78575] -- Improved jobs system resilience to scheduled jobs that may lock up the scheduled jobs table for long periods of time. Each schedule now has a limited amount of time to complete its execution. The timeout is controlled via the `jobs.scheduler.schedule_execution.timeout` [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings). [#77620][#77620] - - -
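-
-A sketch of both settings described above (the values are illustrative, not recommendations):
-
-~~~ sql
-SET CLUSTER SETTING kv.bulk_io_write.min_capacity_remaining_fraction = 0.05;
-SET CLUSTER SETTING jobs.scheduler.schedule_execution.timeout = '5m';
-~~~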

-<h3 id="v21-2-8-command-line-changes">Command-line changes</h3>

-
-- The `cockroach debug tsdump` command can now be used to view timeseries data even after a node failure: rerun the command with the import filename set to `"-"`. [#77976][#77976]
-- Fixed a bug where running [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.2/cockroach-demo) with the `--global` flag would not simulate latencies correctly when combined with the `--insecure` flag. [#78170][#78170]
-
-</div>

-<h3 id="v21-2-8-db-console-changes">DB Console changes</h3>

-
-- Added a **Hot Ranges** page and linked to it in the sidebar. [#77594][#77594]
-- The `_status/nodes` endpoint is now available to all users with the `VIEWACTIVITY` role option, not just `admin` users. In the DB Console, the **Nodes Overview** and **Node Reports** pages now display unredacted information, including node hostnames and IP addresses, for all users with the `VIEWACTIVITY` role option (see the sketch after this list). [#78275][#78275]
-- Fixed a bug where a node in the `UNAVAILABLE` state would have no latency defined, causing the **Network** page to crash. [#78627][#78627]
-
-</div>
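-
-A sketch of granting the role option referenced above (`appuser` is hypothetical):
-
-~~~ sql
-ALTER USER appuser WITH VIEWACTIVITY;
-~~~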

-<h3 id="v21-2-8-bug-fixes">Bug fixes</h3>

-
-- Fixed a bug that caused errors when attempting to create table statistics (with [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.2/create-statistics) or [`ANALYZE`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze)) for a table containing an index that indexed only virtual [computed columns](https://www.cockroachlabs.com/docs/v21.2/computed-columns). This bug had been present since v21.1.0. [#77565][#77565]
-- Fixed a bug that occurred when adding a [hash-sharded index](https://www.cockroachlabs.com/docs/v21.2/hash-sharded-indexes) to a table watched by a changefeed. [#77739][#77739]
-- Fixed a bug where successive schema change backfills could skip spans that were checkpointed by an initial backfill that was restarted. [#77829][#77829]
-- Up to two profiles can now run concurrently, which removes the `profile id not found` errors previously seen at that concurrency level. When a profile is not found, the error message now suggests troubleshooting steps. [#77977][#77977]
-- Fixed an optimizer bug that prevented expressions of the form `(NULL::STRING[] <@ ARRAY['x'])` from being folded to `NULL`. This bug was introduced in v21.2.0. [#78039][#78039]
-- Added a limit of seven concurrent asynchronous consistency checks per store, with an upper timeout of one hour. This prevents abandoned consistency checks from building up in some circumstances, which could lead to increasing disk usage as they held onto [Pebble](https://www.cockroachlabs.com/docs/v21.2/architecture/storage-layer#pebble) snapshots. [#77611][#77611]
-- Fixed a bug where the [**Statement Details**](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page#statement-details-page) page failed to load the query plan even after the plan had been sampled. [#78105][#78105]
-- Fixed a memory leak in the Pebble block cache. [#78257][#78257]
-- Fixed a bug that caused internal errors when `COALESCE` and `IF` expressions had inner expressions with different types that could not be cast to a common type. [#78342][#78342]
-- CockroachDB might now fetch fewer rows when performing lookup and index joins on queries with the [`LIMIT`](https://www.cockroachlabs.com/docs/v21.2/limit-offset#limit) clause. [#78474][#78474]
-- A zone config change event now includes the correct details of what was changed instead of incorrectly displaying `undefined`. [#78634][#78634]
-- Fixed a bug that prevented a table created on a 22.1 node from being queried on a 21.2 node in a mixed-version cluster. [#78657][#78657]
-- Fixed a bug that caused errors when trying to evaluate queries with `NULL` values annotated as a tuple type, such as `NULL:::RECORD`. This bug had been present since v19.1. [#78635][#78635]
-- Fixed a bug where CockroachDB could lose the `INT2VECTOR` and `OIDVECTOR` types of some arrays. [#78630][#78630]
-- Fixed a bug that caused the optimizer to generate invalid query plans, which could result in incorrect query results. The bug, present since v21.1.0, could appear if all of the following conditions were true (a sketch of such a schema and query follows this list):
-    - The query contains a semi-join, such as queries of the form `SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.a = t2.a);`.
-    - The inner table has an index containing the equality column, like `t2.a` in the example query.
-    - The index contains one or more columns that prefix the equality column.
-    - The prefix columns are `NOT NULL` and are constrained to a set of constant values via a `CHECK` constraint or an `IN` condition in the filter. [#78975][#78975]
-- Fixed a bug where [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) could create duplicate entries violating [`UNIQUE`](https://www.cockroachlabs.com/docs/v21.2/unique) constraints in [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-by-row-tables) and tables utilizing `UNIQUE WITHOUT INDEX` constraints. A new post-`IMPORT` validation step for those tables now fails and rolls back the `IMPORT` in such cases. [#78975][#78975]
-
-</div>
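-
-To make the four conditions above concrete, a hypothetical schema and query that meets all of them (sketch only):
-
-~~~ sql
-CREATE TABLE t2 (
-    k INT NOT NULL CHECK (k IN (1, 2, 3)),  -- prefix column constrained to constants and NOT NULL
-    a INT,
-    INDEX (k, a)                            -- index prefixes the equality column a with k
-);
-CREATE TABLE t1 (a INT);
-SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.a = t2.a);  -- semi-join on a
-~~~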

-<h3 id="v21-2-8-contributors">Contributors</h3>

      - -This release includes 54 merged PRs by 33 authors. - -[#77383]: https://github.com/cockroachdb/cockroach/pull/77383 -[#77551]: https://github.com/cockroachdb/cockroach/pull/77551 -[#77565]: https://github.com/cockroachdb/cockroach/pull/77565 -[#77589]: https://github.com/cockroachdb/cockroach/pull/77589 -[#77594]: https://github.com/cockroachdb/cockroach/pull/77594 -[#77611]: https://github.com/cockroachdb/cockroach/pull/77611 -[#77620]: https://github.com/cockroachdb/cockroach/pull/77620 -[#77631]: https://github.com/cockroachdb/cockroach/pull/77631 -[#77739]: https://github.com/cockroachdb/cockroach/pull/77739 -[#77829]: https://github.com/cockroachdb/cockroach/pull/77829 -[#77845]: https://github.com/cockroachdb/cockroach/pull/77845 -[#77932]: https://github.com/cockroachdb/cockroach/pull/77932 -[#77976]: https://github.com/cockroachdb/cockroach/pull/77976 -[#77977]: https://github.com/cockroachdb/cockroach/pull/77977 -[#78039]: https://github.com/cockroachdb/cockroach/pull/78039 -[#78105]: https://github.com/cockroachdb/cockroach/pull/78105 -[#78141]: https://github.com/cockroachdb/cockroach/pull/78141 -[#78170]: https://github.com/cockroachdb/cockroach/pull/78170 -[#78253]: https://github.com/cockroachdb/cockroach/pull/78253 -[#78257]: https://github.com/cockroachdb/cockroach/pull/78257 -[#78275]: https://github.com/cockroachdb/cockroach/pull/78275 -[#78342]: https://github.com/cockroachdb/cockroach/pull/78342 -[#78474]: https://github.com/cockroachdb/cockroach/pull/78474 -[#78522]: https://github.com/cockroachdb/cockroach/pull/78522 -[#78575]: https://github.com/cockroachdb/cockroach/pull/78575 -[#78583]: https://github.com/cockroachdb/cockroach/pull/78583 -[#78599]: https://github.com/cockroachdb/cockroach/pull/78599 -[#78627]: https://github.com/cockroachdb/cockroach/pull/78627 -[#78630]: https://github.com/cockroachdb/cockroach/pull/78630 -[#78634]: https://github.com/cockroachdb/cockroach/pull/78634 -[#78635]: https://github.com/cockroachdb/cockroach/pull/78635 -[#78657]: https://github.com/cockroachdb/cockroach/pull/78657 -[#78975]: https://github.com/cockroachdb/cockroach/pull/78975 -[09fa57587]: https://github.com/cockroachdb/cockroach/commit/09fa57587 -[5c37418e6]: https://github.com/cockroachdb/cockroach/commit/5c37418e6 diff --git a/src/current/_includes/releases/v21.2/v21.2.9.md b/src/current/_includes/releases/v21.2/v21.2.9.md deleted file mode 100644 index e754747ba59..00000000000 --- a/src/current/_includes/releases/v21.2/v21.2.9.md +++ /dev/null @@ -1,69 +0,0 @@ -## v21.2.9 - -Release Date: April 13, 2022 - -{% include releases/release-downloads-docker-image.md release=include.release %} - -

-<h3 id="v21-2-9-sql-language-changes">SQL language changes</h3>

      - -- [`SHOW BACKUP`](https://www.cockroachlabs.com/docs/v21.2/show-backup) now reports accurate row and byte size counts on backups created by a tenant. [#79347][#79347] -- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.2/explain-analyze) now reports memory and disk usage for lookup joins. [#79353][#79353] - -
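-
-For example (a sketch; the table names are hypothetical, and the `LOOKUP` join hint is used only to force the plan shape):
-
-~~~ sql
-EXPLAIN ANALYZE SELECT * FROM orders INNER LOOKUP JOIN customers ON orders.customer_id = customers.id;
-~~~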

-<h3 id="v21-2-9-operational-changes">Operational changes</h3>

      - -- Added a new metric that charts the number of bytes received via [snapshot](https://www.cockroachlabs.com/docs/v21.2/ui-replication-dashboard#snapshots) on any given store. [#79056][#79056] - -

-<h3 id="v21-2-9-db-console-changes">DB Console changes</h3>

      - -- Minor styling changes on the DB Console's [Hot Ranges page](https://www.cockroachlabs.com/docs/v21.2/ui-hot-ranges-page) to follow the same style as other pages. [#79498][#79498] - -

-<h3 id="v21-2-9-bug-fixes">Bug fixes</h3>

-
-- Fixed a bug where [`num_runs`](https://www.cockroachlabs.com/docs/v21.2/show-jobs) was incremented twice for certain jobs upon being started. [#79051][#79051]
-- Index usage stats are now properly captured for index joins. [#79240][#79240]
-- [`ALTER TABLE ADD COLUMN`](https://www.cockroachlabs.com/docs/v21.2/alter-table) and [`ALTER TABLE DROP COLUMN`](https://www.cockroachlabs.com/docs/v21.2/alter-table) are now both subject to [admission control](https://www.cockroachlabs.com/docs/v21.2/architecture/admission-control), which will prevent these operations from overloading the storage engine. [#79211][#79211]
-- Fixed a performance regression released in v21.1.7 that reverted [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) to its pre-v21.2.5 memory usage and runtime during planning of larger backups with many prior incremental layers. [#79267][#79267]
-- Fixed a bug where [`SHOW SCHEMAS FROM <database>`](https://www.cockroachlabs.com/docs/v21.2/show-schemas) would not include user-defined schemas. [#79307][#79307]
-- Previously, [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.2/import-into) could create duplicate entries for [`UNIQUE`](https://www.cockroachlabs.com/docs/v21.2/unique) constraints in [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.2/multiregion-overview#regional-by-row-tables) tables and tables utilizing `UNIQUE WITHOUT INDEX` constraints. This fix introduces a new validation step after the `IMPORT INTO` for those tables, which causes the `IMPORT INTO` to fail and be rolled back in such cases. [#79326][#79326]
-- Fixed a bug in I/O [admission control](https://www.cockroachlabs.com/docs/v21.2/architecture/admission-control) that could result in admission control failing to rate-limit when traffic was stalled such that no work was admitted, despite the store being in an unhealthy state. [#79342][#79342]
-- Previously, CockroachDB could run into `memory budget exceeded` errors when performing [lookup joins](https://www.cockroachlabs.com/docs/v21.2/joins#lookup-joins) under certain memory conditions. Such operations now more reliably [spill to disk](https://www.cockroachlabs.com/docs/v21.2/vectorized-execution#disk-spilling-operations), which should reduce these errors for larger joins. [#79353][#79353]
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) read requests are now sent with lower [admission control](https://www.cockroachlabs.com/docs/v21.2/architecture/admission-control) priority than normal traffic. [#79367][#79367]
-- Previously, [`LIMIT`](https://www.cockroachlabs.com/docs/v21.2/limit-offset) queries with an [`ORDER BY`](https://www.cockroachlabs.com/docs/v21.2/order-by) clause that scan the index of virtual system tables, such as `pg_type`, could return incorrect results. This is corrected by teaching the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) that `LIMIT` operations cannot be pushed into ordered scans of virtual indexes. [#79464][#79464]
-- Fixed a bug that caused the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) to generate query plans with logically incorrect lookup joins. The bug, present since v21.2.0, could only occur in queries with an inner join, e.g., `t1 JOIN t2`, if all of the following were true (a sketch of such a schema follows this list):
-    - The join contains an equality condition between columns of both tables, e.g., `t1.a = t2.a`.
-    - A query filter or [`CHECK`](https://www.cockroachlabs.com/docs/v21.2/check) constraint constrains a column to a set of specific values, e.g., `t2.b IN (1, 2, 3)`. In the case of a `CHECK` constraint, the column must be `NOT NULL`.
-    - A query filter or `CHECK` constraint constrains a column to a range, e.g., `t2.c > 0`. In the case of a `CHECK` constraint, the column must be `NOT NULL`.
-    - An index contains a column from each of the criteria above, e.g., `INDEX t2(a, b, c)`. [#79505][#79505]
-- Fixed a bug that caused the [optimizer](https://www.cockroachlabs.com/docs/v21.2/cost-based-optimizer) to generate invalid query plans that could result in incorrect query results. The bug, present since v21.1.0, could appear if all of the following conditions were true:
-    - The query contains a semi-join, e.g., of the form `SELECT * FROM a WHERE EXISTS (SELECT * FROM b WHERE a.a @> b.b)`.
-    - The inner table has a multi-column [inverted index](https://www.cockroachlabs.com/docs/v21.2/inverted-indexes) containing the inverted column in the filter.
-    - The index prefix columns are constrained to a set of values via the filter or a [`CHECK`](https://www.cockroachlabs.com/docs/v21.2/check) constraint, e.g., with an `IN` operator. In the case of a `CHECK` constraint, the column is `NOT NULL`. [#79505][#79505]
-- Fixed a bug preventing the DB Console from properly loading static assets, causing the interface to appear blank. [#79662][#79662]
-
-</div>
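-
-A hypothetical schema and query meeting all of the lookup-join criteria listed above (sketch only):
-
-~~~ sql
-CREATE TABLE t1 (a INT);
-CREATE TABLE t2 (
-    a INT,
-    b INT NOT NULL,
-    c INT NOT NULL,
-    INDEX (a, b, c),          -- an index containing a column from each criterion
-    CHECK (b IN (1, 2, 3)),   -- a column constrained to a set of specific values
-    CHECK (c > 0)             -- a column constrained to a range
-);
-SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;  -- inner join with an equality condition
-~~~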

-<h3 id="v21-2-9-performance-improvements">Performance improvements</h3>

      - -- The DB Console no longer downloads unused JS files on load. [#78668][#78668] -- The DB Console now supports caching of files in the web browser. [#79394][#79394] - -

-<h3 id="v21-2-9-contributors">Contributors</h3>

      - -This release includes 31 merged PRs by 19 authors. - -[#78668]: https://github.com/cockroachdb/cockroach/pull/78668 -[#79051]: https://github.com/cockroachdb/cockroach/pull/79051 -[#79056]: https://github.com/cockroachdb/cockroach/pull/79056 -[#79211]: https://github.com/cockroachdb/cockroach/pull/79211 -[#79240]: https://github.com/cockroachdb/cockroach/pull/79240 -[#79267]: https://github.com/cockroachdb/cockroach/pull/79267 -[#79307]: https://github.com/cockroachdb/cockroach/pull/79307 -[#79326]: https://github.com/cockroachdb/cockroach/pull/79326 -[#79342]: https://github.com/cockroachdb/cockroach/pull/79342 -[#79347]: https://github.com/cockroachdb/cockroach/pull/79347 -[#79353]: https://github.com/cockroachdb/cockroach/pull/79353 -[#79367]: https://github.com/cockroachdb/cockroach/pull/79367 -[#79394]: https://github.com/cockroachdb/cockroach/pull/79394 -[#79464]: https://github.com/cockroachdb/cockroach/pull/79464 -[#79498]: https://github.com/cockroachdb/cockroach/pull/79498 -[#79505]: https://github.com/cockroachdb/cockroach/pull/79505 -[#79662]: https://github.com/cockroachdb/cockroach/pull/79662 -[a9c87a179]: https://github.com/cockroachdb/cockroach/commit/a9c87a179 diff --git a/src/current/_includes/releases/v23.1/v23.1.9.md b/src/current/_includes/releases/v23.1/v23.1.9.md index bc1d6c858e3..0c93cdc147f 100644 --- a/src/current/_includes/releases/v23.1/v23.1.9.md +++ b/src/current/_includes/releases/v23.1/v23.1.9.md @@ -85,7 +85,7 @@ Release Date: September 7, 2023 - Fixed a bug where, in rare circumstances, a [replication](https://cockroachlabs.com/docs/v23.1/architecture/replication-layer) could get stuck when proposed near lease or leadership changes, especially under overload, and the [replica circuit breakers]([../v23.1](https://cockroachlabs.com/docs/v23.1/architecture/replication-layer#per-replica-circuit-breakers) could trip. A previous attempt to fix this issue has been reverted in favor of this fix. [#106515][#106515] - Fixed a bug in the SQL syntax for [`CREATE TABLE AS`](../v23.1/create-table-as.html) [schema change](../v23.1/online-schema-changes.html) [job](../v23.1/show-jobs.html) description. [#107404][#107404] - Fixed an internal error in [`UPDATE`](../v23.1/update.html), [`UPSERT`](../v23.1/upsert.html), [`INSERT`](../v23.1/insert.html), or [`DELETE`](../v23.1/delete.html) statements run concurrently with [`ALTER TABLE ... ADD COLUMN`](../v23.1/alter-table.html#add-column) of a [virtual computed column](../v23.1/computed-columns.html#virtual-computed-columns) on the same table. [#107403][#107403] -- Fixed a bug that caused internal errors when using [user-defined types](../v23.1/create-type.html) in [views](../v23.1/views.html) and [user-defined functions](../v23.1/user-defined-functions.html) that have [subqueries](../v23.1/subqueries.html). This bug was present when using views since version [v21.2](../releases/v21.2.html). It was present when using user-defined functions since [v23.1](../releases/v23.1.html). [#106955][#106955] +- Fixed a bug that caused internal errors when using [user-defined types](../v23.1/create-type.html) in [views](../v23.1/views.html) and [user-defined functions](../v23.1/user-defined-functions.html) that have [subqueries](../v23.1/subqueries.html). This bug was present when using views since version v21.2. It was present when using user-defined functions since [v23.1](../releases/v23.1.html). 
[#106955][#106955] - The timeout duration when loading the [**Hot Ranges** page](../v23.1/ui-hot-ranges-page.html) has been increased to 30 minutes. [#107497][#107497] - Fixed the SQL syntax for [`CREATE MATERIALIZED VIEW AS`](../v23.1/views.html#materialized-views) [schema change](../v23.1/online-schema-changes.html) [job](../v23.1/show-jobs.html) descriptions. [#107471][#107471] - Reduced [contention](../v23.1/performance-best-practices-overview.html#transaction-contention) on the `system.statement_statistics` table which has caused the [SQL statistics](../v23.1/cost-based-optimizer.html#table-statistics) compaction [job](../v23.1/show-jobs.html) to fail. [#107573][#107573] diff --git a/src/current/_includes/sidebar-data-v21.2.json b/src/current/_includes/sidebar-data-v21.2.json deleted file mode 100644 index 30854a035fc..00000000000 --- a/src/current/_includes/sidebar-data-v21.2.json +++ /dev/null @@ -1,18 +0,0 @@ -[ - { - "title": "Docs Home", - "is_top_level": true, - "urls": [ - "/" - ] - }, - {% include_cached v21.2/sidebar-data/get-started.json %}, - {% include_cached v21.2/sidebar-data/develop.json %}, - {% include_cached v21.2/sidebar-data/deploy.json %}, - {% include_cached v21.2/sidebar-data/manage.json %}, - {% include_cached v21.2/sidebar-data/migrate.json %}, - {% include_cached v21.2/sidebar-data/stream.json %}, - {% include_cached v21.2/sidebar-data/reference.json %}, - {% include_cached v21.2/sidebar-data/releases.json %}, - {% include_cached sidebar-data-cockroach-university.json %} -] diff --git a/src/current/_includes/v21.2/app/before-you-begin.md b/src/current/_includes/v21.2/app/before-you-begin.md deleted file mode 100644 index b271d6ff85c..00000000000 --- a/src/current/_includes/v21.2/app/before-you-begin.md +++ /dev/null @@ -1,12 +0,0 @@ -1. [Install CockroachDB](install-cockroachdb.html). -2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster. -3. Choose the instructions that correspond to whether your cluster is secure or insecure: - -
      - - -
      - -
      -{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %} -
      \ No newline at end of file diff --git a/src/current/_includes/v21.2/app/cc-free-tier-params.md b/src/current/_includes/v21.2/app/cc-free-tier-params.md deleted file mode 100644 index f8a196cdd8e..00000000000 --- a/src/current/_includes/v21.2/app/cc-free-tier-params.md +++ /dev/null @@ -1,10 +0,0 @@ -Where: - -- `{username}` and `{password}` specify the SQL username and password that you created earlier. -- `{globalhost}` is the name of the CockroachDB {{ site.data.products.cloud }} free tier host (e.g., `free-tier.gcp-us-central1.cockroachlabs.cloud`). -- `{path to the CA certificate}` is the path to the `cc-ca.crt` file that you downloaded from the CockroachDB {{ site.data.products.cloud }} Console. -- `{cluster_name}` is the name of your cluster. - -{{site.data.alerts.callout_info}} -If you are using the connection string that you [copied from the **Connection info** modal](#set-up-your-cluster-connection), your username, password, hostname, and cluster name will be pre-populated. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/app/create-a-database.md b/src/current/_includes/v21.2/app/create-a-database.md deleted file mode 100644 index 468eb93a57f..00000000000 --- a/src/current/_includes/v21.2/app/create-a-database.md +++ /dev/null @@ -1,54 +0,0 @@ -
-
-1. In the SQL shell, create the `bank` database that your application will use:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE bank;
-    ~~~
-
-1. Create a SQL user for your app:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > CREATE USER <username> WITH PASSWORD '<password>';
-    ~~~
-
-    Take note of the username and password. You will use them in your application code later.
-
-1. Give the user the necessary permissions:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > GRANT ALL ON DATABASE bank TO <username>;
-    ~~~
-
      - -
-
-1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html).
-1. Start the [built-in SQL shell](cockroach-sql.html) using the connection string you got from the CockroachDB {{ site.data.products.cloud }} Console:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql \
-    --url='<connection-string>'
-    ~~~
-
-1. In the SQL shell, create the `bank` database that your application will use:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE bank;
-    ~~~
-
-1. Exit the SQL shell:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > \q
-    ~~~
-
-
      \ No newline at end of file diff --git a/src/current/_includes/v21.2/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v21.2/app/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 1e259b96012..00000000000 --- a/src/current/_includes/v21.2/app/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v21.2/app/for-a-complete-example-go.md b/src/current/_includes/v21.2/app/for-a-complete-example-go.md deleted file mode 100644 index 64803f686a9..00000000000 --- a/src/current/_includes/v21.2/app/for-a-complete-example-go.md +++ /dev/null @@ -1,4 +0,0 @@ -For complete examples, see: - -- [Build a Go App with CockroachDB](build-a-go-app-with-cockroachdb.html) (pgx) -- [Build a Go App with CockroachDB and GORM](build-a-go-app-with-cockroachdb.html) diff --git a/src/current/_includes/v21.2/app/for-a-complete-example-java.md b/src/current/_includes/v21.2/app/for-a-complete-example-java.md deleted file mode 100644 index b4c63135ae0..00000000000 --- a/src/current/_includes/v21.2/app/for-a-complete-example-java.md +++ /dev/null @@ -1,4 +0,0 @@ -For complete examples, see: - -- [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html) (JDBC) -- [Build a Java App with CockroachDB and Hibernate](build-a-java-app-with-cockroachdb-hibernate.html) diff --git a/src/current/_includes/v21.2/app/for-a-complete-example-python.md b/src/current/_includes/v21.2/app/for-a-complete-example-python.md deleted file mode 100644 index c647ce75df2..00000000000 --- a/src/current/_includes/v21.2/app/for-a-complete-example-python.md +++ /dev/null @@ -1,5 +0,0 @@ -For complete examples, see: - -- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb.html) (psycopg2) -- [Build a Python App with CockroachDB and SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html) -- [Build a Python App with CockroachDB and Django](build-a-python-app-with-cockroachdb-django.html) diff --git a/src/current/_includes/v21.2/app/hibernate-dialects-note.md b/src/current/_includes/v21.2/app/hibernate-dialects-note.md deleted file mode 100644 index 85f217abd3c..00000000000 --- a/src/current/_includes/v21.2/app/hibernate-dialects-note.md +++ /dev/null @@ -1,5 +0,0 @@ -Versions of the Hibernate CockroachDB dialect correspond to the version of CockroachDB installed on your machine. For example, `org.hibernate.dialect.CockroachDB201Dialect` corresponds to CockroachDB v20.1 and later, and `org.hibernate.dialect.CockroachDB192Dialect` corresponds to CockroachDB v19.2 and later. - -All dialect versions are forward-compatible (e.g., CockroachDB v20.1 is compatible with `CockroachDB192Dialect`), as long as your application is not affected by any backward-incompatible changes listed in your CockroachDB version's [release notes](../releases/index.html). 
In the event of a CockroachDB version upgrade, using a previous version of the CockroachDB dialect will not break an application, but, to enable all features available in your version of CockroachDB, we recommend keeping the dialect version in sync with the installed version of CockroachDB. - -Not all versions of CockroachDB have a corresponding dialect yet. Use the dialect number that is closest to your installed version of CockroachDB. For example, use `CockroachDB201Dialect` when using CockroachDB v21.1 and later. \ No newline at end of file diff --git a/src/current/_includes/v21.2/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v21.2/app/insecure/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 0fff36e7545..00000000000 --- a/src/current/_includes/v21.2/app/insecure/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v21.2/app/insecure/jooq-basic-sample/Sample.java b/src/current/_includes/v21.2/app/insecure/jooq-basic-sample/Sample.java deleted file mode 100644 index d1a54a8ddd2..00000000000 --- a/src/current/_includes/v21.2/app/insecure/jooq-basic-sample/Sample.java +++ /dev/null @@ -1,215 +0,0 @@ -package com.cockroachlabs; - -import com.cockroachlabs.example.jooq.db.Tables; -import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord; -import org.jooq.DSLContext; -import org.jooq.SQLDialect; -import org.jooq.Source; -import org.jooq.conf.RenderQuotedNames; -import org.jooq.conf.Settings; -import org.jooq.exception.DataAccessException; -import org.jooq.impl.DSL; - -import java.io.InputStream; -import java.sql.Connection; -import java.sql.DriverManager; -import java.sql.SQLException; -import java.util.*; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; -import java.util.function.Function; - -import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS; - -public class Sample { - - private static final Random RAND = new Random(); - private static final boolean FORCE_RETRY = false; - private static final String RETRY_SQL_STATE = "40001"; - private static final int MAX_ATTEMPT_COUNT = 6; - - private static Function addAccounts() { - return ctx -> { - long rv = 0; - - ctx.delete(ACCOUNTS).execute(); - ctx.batchInsert( - new AccountsRecord(1L, 1000L), - new AccountsRecord(2L, 250L), - new AccountsRecord(3L, 314159L) - ).execute(); - - rv = 1; - System.out.printf("APP: addAccounts() --> %d\n", rv); - return rv; - }; - } - - private static Function transferFunds(long fromId, long toId, long amount) { - return ctx -> { - long rv = 0; - - AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId)); - AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId)); - - if (!(amount > fromAccount.getBalance())) { - 
fromAccount.setBalance(fromAccount.getBalance() - amount); - toAccount.setBalance(toAccount.getBalance() + amount); - - ctx.batchUpdate(fromAccount, toAccount).execute(); - rv = amount; - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv); - } - - return rv; - }; - } - - // Test our retry handling logic if FORCE_RETRY is true. This - // method is only used to test the retry logic. It is not - // intended for production code. - private static Function forceRetryLogic() { - return ctx -> { - long rv = -1; - try { - System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n"); - ctx.execute("SELECT crdb_internal.force_retry('1s')"); - } catch (DataAccessException e) { - System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n"); - throw e; - } - return rv; - }; - } - - private static Function getAccountBalance(long id) { - return ctx -> { - AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id)); - long balance = account.getBalance(); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance); - return balance; - }; - } - - // Run SQL code in a way that automatically handles the - // transaction retry logic so we do not have to duplicate it in - // various places. - private static long runTransaction(DSLContext session, Function fn) { - AtomicLong rv = new AtomicLong(0L); - AtomicInteger attemptCount = new AtomicInteger(0); - - while (attemptCount.get() < MAX_ATTEMPT_COUNT) { - attemptCount.incrementAndGet(); - - if (attemptCount.get() > 1) { - System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get()); - } - - if (session.connectionResult(connection -> { - connection.setAutoCommit(false); - System.out.printf("APP: BEGIN;\n"); - - if (attemptCount.get() == MAX_ATTEMPT_COUNT) { - String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryLogic()'. - if (FORCE_RETRY) { - session.fetch("SELECT now()"); - } - - try { - rv.set(fn.apply(session)); - if (rv.get() != -1) { - connection.commit(); - System.out.printf("APP: COMMIT;\n"); - return true; - } - } catch (DataAccessException | SQLException e) { - String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState(); - - if (RETRY_SQL_STATE.equals(sqlState)) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a little - // before trying again. Each time through the - // loop we sleep for a little longer than the last - // time (A.K.A. exponential backoff). 
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get()); - System.out.printf("APP: ROLLBACK;\n"); - connection.rollback(); - int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100); - System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // no-op - } - rv.set(-1L); - } else { - throw e; - } - } - - return false; - })) { - break; - } - } - - return rv.get(); - } - - public static void main(String[] args) throws Exception { - try (Connection connection = DriverManager.getConnection( - "jdbc:postgresql://localhost:26257/bank?sslmode=disable", - "maxroach", - "" - )) { - DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings() - .withExecuteLogging(true) - .withRenderQuotedNames(RenderQuotedNames.NEVER)); - - // Initialise database with db.sql script - try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) { - ctx.parser().parse(Source.of(in).readString()).executeBatch(); - } - - long fromAccountId = 1; - long toAccountId = 2; - long transferAmount = 100; - - if (FORCE_RETRY) { - System.out.printf("APP: About to test retry logic in 'runTransaction'\n"); - runTransaction(ctx, forceRetryLogic()); - } else { - - runTransaction(ctx, addAccounts()); - long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalance = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalance != -1 && toBalance != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance); - } - - // Transfer $100 from account 1 to account 2 - long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount)); - if (transferResult != -1) { - // Success! - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult); - - long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalanceAfter != -1 && toBalanceAfter != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter); - } - } - } - } - } -} diff --git a/src/current/_includes/v21.2/app/insecure/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v21.2/app/insecure/jooq-basic-sample/jooq-basic-sample.zip deleted file mode 100644 index f11f86b8f43..00000000000 Binary files a/src/current/_includes/v21.2/app/insecure/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ diff --git a/src/current/_includes/v21.2/app/insecure/upperdb-basic-sample/main.go b/src/current/_includes/v21.2/app/insecure/upperdb-basic-sample/main.go deleted file mode 100644 index 5c855356d7e..00000000000 --- a/src/current/_includes/v21.2/app/insecure/upperdb-basic-sample/main.go +++ /dev/null @@ -1,185 +0,0 @@ -package main - -import ( - "fmt" - "log" - "time" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/adapter/cockroachdb" -) - -// The settings variable stores connection details. 
-var settings = cockroachdb.ConnectionURL{ - Host: "localhost", - Database: "bank", - User: "maxroach", - Options: map[string]string{ - // Insecure node. - "sslmode": "disable", - }, -} - -// Accounts is a handy way to represent a collection. -func Accounts(sess db.Session) db.Store { - return sess.Collection("accounts") -} - -// Account is used to represent a single record in the "accounts" table. -type Account struct { - ID uint64 `db:"id,omitempty"` - Balance int64 `db:"balance"` -} - -// Collection is required in order to create a relation between the Account -// struct and the "accounts" table. -func (a *Account) Store(sess db.Session) db.Store { - return Accounts(sess) -} - -// createTables creates all the tables that are neccessary to run this example. -func createTables(sess db.Session) error { - _, err := sess.SQL().Exec(` - CREATE TABLE IF NOT EXISTS accounts ( - ID SERIAL PRIMARY KEY, - balance INT - ) - `) - if err != nil { - return err - } - return nil -} - -// crdbForceRetry can be used to simulate a transaction error and -// demonstrate upper/db's ability to retry the transaction automatically. -// -// By default, upper/db will retry the transaction five times, if you want -// to modify this number use: sess.SetMaxTransactionRetries(n). -// -// This is only used for demonstration purposes and not intended -// for production code. -func crdbForceRetry(sess db.Session) error { - var err error - - // The first statement in a transaction can be retried transparently on the - // server, so we need to add a placeholder statement so that our - // force_retry() statement isn't the first one. - _, err = sess.SQL().Exec(`SELECT 1`) - if err != nil { - return err - } - - // If force_retry is called during the specified interval from the beginning - // of the transaction it returns a retryable error. If not, 0 is returned - // instead of an error. - _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`) - if err != nil { - return err - } - - return nil -} - -func main() { - // Connect to the local CockroachDB node. - sess, err := cockroachdb.Open(settings) - if err != nil { - log.Fatal("cockroachdb.Open: ", err) - } - defer sess.Close() - - // Adjust this number to fit your specific needs (set to 5, by default) - // sess.SetMaxTransactionRetries(10) - - // Create the "accounts" table - createTables(sess) - - // Delete all the previous items in the "accounts" table. - err = Accounts(sess).Truncate() - if err != nil { - log.Fatal("Truncate: ", err) - } - - // Create a new account with a balance of 1000. - account1 := Account{Balance: 1000} - err = Accounts(sess).InsertReturning(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Create a new account with a balance of 250. - account2 := Account{Balance: 250} - err = Accounts(sess).InsertReturning(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Change the balance of the first account. - account1.Balance = 500 - err = sess.Save(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Change the balance of the second account. - account2.Balance = 999 - err = sess.Save(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Delete the first record. - err = sess.Delete(&account1) - if err != nil { - log.Fatal("Delete: ", err) - } - - startTime := time.Now() - - // Add a couple of new records within a transaction. 
- err = sess.Tx(func(tx db.Session) error { - var err error - - if err = tx.Save(&Account{Balance: 887}); err != nil { - return err - } - - if time.Now().Sub(startTime) < time.Second*1 { - // Will fail continuously for 2 seconds. - if err = crdbForceRetry(tx); err != nil { - return err - } - } - - if err = tx.Save(&Account{Balance: 342}); err != nil { - return err - } - - return nil - }) - if err != nil { - log.Fatal("Could not commit transaction: ", err) - } - - // Printing records - printRecords(sess) -} - -func printRecords(sess db.Session) { - accounts := []Account{} - err := Accounts(sess).Find().All(&accounts) - if err != nil { - log.Fatal("Find: ", err) - } - log.Printf("Balances:") - for i := range accounts { - fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance) - } -} diff --git a/src/current/_includes/v21.2/app/java-tls-note.md b/src/current/_includes/v21.2/app/java-tls-note.md deleted file mode 100644 index a1fd6f61600..00000000000 --- a/src/current/_includes/v21.2/app/java-tls-note.md +++ /dev/null @@ -1,13 +0,0 @@ -{{site.data.alerts.callout_danger}} -CockroachDB supports TLS 1.2 and 1.3, and uses 1.3 by default. - -[A bug in the TLS 1.3 implementation](https://bugs.openjdk.java.net/browse/JDK-8236039) in Java 11 versions lower than 11.0.7 and Java 13 versions lower than 13.0.3 makes the versions incompatible with CockroachDB. - -If an incompatible version is used, the client may throw the following exception: - -`javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request` - -For applications running Java 11 or 13, make sure that you have version 11.0.7 or higher, or 13.0.3 or higher. - -If you cannot upgrade to a version higher than 11.0.7 or 13.0.3, you must configure the application to use TLS 1.2. For example, when starting your app, use: `$ java -Djdk.tls.client.protocols=TLSv1.2 appName` -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/app/java-version-note.md b/src/current/_includes/v21.2/app/java-version-note.md deleted file mode 100644 index 3d559314262..00000000000 --- a/src/current/_includes/v21.2/app/java-version-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -We recommend using Java versions 8+ with CockroachDB. 
-{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/app/jooq-basic-sample/Sample.java b/src/current/_includes/v21.2/app/jooq-basic-sample/Sample.java deleted file mode 100644 index fd71726603e..00000000000 --- a/src/current/_includes/v21.2/app/jooq-basic-sample/Sample.java +++ /dev/null @@ -1,215 +0,0 @@ -package com.cockroachlabs; - -import com.cockroachlabs.example.jooq.db.Tables; -import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord; -import org.jooq.DSLContext; -import org.jooq.SQLDialect; -import org.jooq.Source; -import org.jooq.conf.RenderQuotedNames; -import org.jooq.conf.Settings; -import org.jooq.exception.DataAccessException; -import org.jooq.impl.DSL; - -import java.io.InputStream; -import java.sql.Connection; -import java.sql.DriverManager; -import java.sql.SQLException; -import java.util.*; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; -import java.util.function.Function; - -import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS; - -public class Sample { - - private static final Random RAND = new Random(); - private static final boolean FORCE_RETRY = false; - private static final String RETRY_SQL_STATE = "40001"; - private static final int MAX_ATTEMPT_COUNT = 6; - - private static Function addAccounts() { - return ctx -> { - long rv = 0; - - ctx.delete(ACCOUNTS).execute(); - ctx.batchInsert( - new AccountsRecord(1L, 1000L), - new AccountsRecord(2L, 250L), - new AccountsRecord(3L, 314159L) - ).execute(); - - rv = 1; - System.out.printf("APP: addAccounts() --> %d\n", rv); - return rv; - }; - } - - private static Function transferFunds(long fromId, long toId, long amount) { - return ctx -> { - long rv = 0; - - AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId)); - AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId)); - - if (!(amount > fromAccount.getBalance())) { - fromAccount.setBalance(fromAccount.getBalance() - amount); - toAccount.setBalance(toAccount.getBalance() + amount); - - ctx.batchUpdate(fromAccount, toAccount).execute(); - rv = amount; - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv); - } - - return rv; - }; - } - - // Test our retry handling logic if FORCE_RETRY is true. This - // method is only used to test the retry logic. It is not - // intended for production code. - private static Function forceRetryLogic() { - return ctx -> { - long rv = -1; - try { - System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n"); - ctx.execute("SELECT crdb_internal.force_retry('1s')"); - } catch (DataAccessException e) { - System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n"); - throw e; - } - return rv; - }; - } - - private static Function getAccountBalance(long id) { - return ctx -> { - AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id)); - long balance = account.getBalance(); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance); - return balance; - }; - } - - // Run SQL code in a way that automatically handles the - // transaction retry logic so we do not have to duplicate it in - // various places. 
- private static long runTransaction(DSLContext session, Function<DSLContext, Long> fn) { - AtomicLong rv = new AtomicLong(0L); - AtomicInteger attemptCount = new AtomicInteger(0); - - while (attemptCount.get() < MAX_ATTEMPT_COUNT) { - attemptCount.incrementAndGet(); - - if (attemptCount.get() > 1) { - System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get()); - } - - if (session.connectionResult(connection -> { - connection.setAutoCommit(false); - System.out.printf("APP: BEGIN;\n"); - - if (attemptCount.get() == MAX_ATTEMPT_COUNT) { - String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'forceRetryLogic()'. - if (FORCE_RETRY) { - session.fetch("SELECT now()"); - } - - try { - rv.set(fn.apply(session)); - if (rv.get() != -1) { - connection.commit(); - System.out.printf("APP: COMMIT;\n"); - return true; - } - } catch (DataAccessException | SQLException e) { - String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState(); - - if (RETRY_SQL_STATE.equals(sqlState)) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a little - // before trying again. Each time through the - // loop we sleep for a little longer than the last - // time (A.K.A. exponential backoff). - System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get()); - System.out.printf("APP: ROLLBACK;\n"); - connection.rollback(); - int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100); - System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // no-op - } - rv.set(-1L); - } else { - throw e; - } - } - - return false; - })) { - break; - } - } - - return rv.get(); - } - - public static void main(String[] args) throws Exception { - try (Connection connection = DriverManager.getConnection( - "jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key.pk8&sslcert=certs/client.maxroach.crt", - "maxroach", - "" - )) { - DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings() - .withExecuteLogging(true) - .withRenderQuotedNames(RenderQuotedNames.NEVER)); - - // Initialize the database with the db.sql script. - try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) { - ctx.parser().parse(Source.of(in).readString()).executeBatch(); - } - - long fromAccountId = 1; - long toAccountId = 2; - long transferAmount = 100; - - if (FORCE_RETRY) { - System.out.printf("APP: About to test retry logic in 'runTransaction'\n"); - runTransaction(ctx, forceRetryLogic()); - } else { - - runTransaction(ctx, addAccounts()); - long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalance = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalance != -1 && toBalance != -1) { - // Success!
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance); - } - - // Transfer $100 from account 1 to account 2 - long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount)); - if (transferResult != -1) { - // Success! - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult); - - long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalanceAfter != -1 && toBalanceAfter != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter); - } - } - } - } - } -} diff --git a/src/current/_includes/v21.2/app/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v21.2/app/jooq-basic-sample/jooq-basic-sample.zip deleted file mode 100644 index 859305478c0..00000000000 Binary files a/src/current/_includes/v21.2/app/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ diff --git a/src/current/_includes/v21.2/app/pkcs8-gen.md b/src/current/_includes/v21.2/app/pkcs8-gen.md deleted file mode 100644 index 411d262e970..00000000000 --- a/src/current/_includes/v21.2/app/pkcs8-gen.md +++ /dev/null @@ -1,8 +0,0 @@ -You can pass the [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) to [`cockroach cert`](cockroach-cert.html) to generate a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. For example, if you have the user `max`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client max --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key -~~~ - -The generated PKCS8 key will be named `client.max.key.pk8`. diff --git a/src/current/_includes/v21.2/app/python/sqlalchemy/sqlalchemy-large-txns.py b/src/current/_includes/v21.2/app/python/sqlalchemy/sqlalchemy-large-txns.py deleted file mode 100644 index 7a6ef82c2e3..00000000000 --- a/src/current/_includes/v21.2/app/python/sqlalchemy/sqlalchemy-large-txns.py +++ /dev/null @@ -1,57 +0,0 @@ -from sqlalchemy import create_engine, Column, Float, Integer -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import sessionmaker -from cockroachdb.sqlalchemy import run_transaction -from random import random - -Base = declarative_base() - -# The code below assumes you have run the following SQL statements. - -# CREATE DATABASE pointstore; - -# USE pointstore; - -# CREATE TABLE points ( -# id INT PRIMARY KEY DEFAULT unique_rowid(), -# x FLOAT NOT NULL, -# y FLOAT NOT NULL, -# z FLOAT NOT NULL -# ); - -engine = create_engine( - # For cockroach demo: - 'cockroachdb://:@:/bank?sslmode=require', - echo=True # Log SQL queries to stdout -) - - -class Point(Base): - __tablename__ = 'points' - id = Column(Integer, primary_key=True) - x = Column(Float) - y = Column(Float) - z = Column(Float) - - -def add_points(num_points): - chunk_size = 1000 # Tune this based on object sizes. 
- - def add_points_helper(sess, chunk, num_points): - points = [] - for i in range(chunk, min(chunk + chunk_size, num_points)): - points.append( - Point(x=random()*1024, y=random()*1024, z=random()*1024) - ) - sess.bulk_save_objects(points) - - for chunk in range(0, num_points, chunk_size): - run_transaction( - sessionmaker(bind=engine), - lambda s: add_points_helper( - s, chunk, min(chunk + chunk_size, num_points) - ) - ) - - -add_points(10000) diff --git a/src/current/_includes/v21.2/app/retry-errors.md b/src/current/_includes/v21.2/app/retry-errors.md deleted file mode 100644 index 5f219f53e12..00000000000 --- a/src/current/_includes/v21.2/app/retry-errors.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Your application should [use a retry loop to handle transaction errors](error-handling-and-troubleshooting.html#transaction-retry-errors) that can occur under contention. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/app/sample-setup.md b/src/current/_includes/v21.2/app/sample-setup.md deleted file mode 100644 index 913e8d460c3..00000000000 --- a/src/current/_includes/v21.2/app/sample-setup.md +++ /dev/null @@ -1,48 +0,0 @@ - -
      - -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -1. Navigate to the cluster's **Overview** page, and select **Connect**. - -1. Under the **Connection String** tab, download the cluster certificate. - -1. Take note of the connection string provided. You'll use it to connect to the database later in this tutorial. - -
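The exact string is generated for your cluster, so treat the following only as a sketch of the shape to expect — a standard PostgreSQL connection URL, where every value is a placeholder:

~~~
postgresql://<username>:<password>@<host>:26257/defaultdb?sslmode=verify-full&sslrootcert=<path-to-cluster-certificate>
~~~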
      - -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach start-single-node`](cockroach-start-single-node.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --advertise-addr 'localhost' --insecure - ~~~ - - This starts an insecure, single-node cluster. -1. Take note of the following connection information in the SQL shell welcome text: - - ~~~ - CockroachDB node starting at 2021-08-30 17:25:30.06524 +0000 UTC (took 4.3s) - build: CCL v21.1.6 @ 2021/07/20 15:33:43 (go1.15.11) - webui: http://localhost:8080 - sql: postgresql://root@localhost:26257?sslmode=disable - ~~~ - - You'll use the `sql` connection string to connect to the cluster later in this tutorial. - - -{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %} - -
      \ No newline at end of file diff --git a/src/current/_includes/v21.2/app/see-also-links.md b/src/current/_includes/v21.2/app/see-also-links.md deleted file mode 100644 index ee55292e744..00000000000 --- a/src/current/_includes/v21.2/app/see-also-links.md +++ /dev/null @@ -1,9 +0,0 @@ -You might also be interested in the following pages: - -- [Client Connection Parameters](connection-parameters.html) -- [Connection Pooling](connection-pooling.html) -- [Data Replication](demo-replication-and-rebalancing.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Replication & Rebalancing](demo-replication-and-rebalancing.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html) diff --git a/src/current/_includes/v21.2/app/start-cockroachdb.md b/src/current/_includes/v21.2/app/start-cockroachdb.md deleted file mode 100644 index a3348e2c4da..00000000000 --- a/src/current/_includes/v21.2/app/start-cockroachdb.md +++ /dev/null @@ -1,58 +0,0 @@ -Choose whether to run a temporary local cluster or a free CockroachDB cluster on CockroachDB {{ site.data.products.serverless }}. The instructions below will adjust accordingly. - -
      - -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -The **Connection info** dialog shows information about how to connect to your cluster. - -1. Click the **Choose your OS** dropdown, and select the operating system of your local machine. - -1. Click the **Connection string** tab in the **Connection info** dialog. - -1. Open a new terminal on your local machine, and run the command provided in step **1** to download the CA certificate. This certificate is required by some clients connecting to CockroachDB {{ site.data.products.cloud }}. - -1. Copy the connection string provided in step **2** to a secure location. - - {{site.data.alerts.callout_info}} - The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. - {{site.data.alerts.end}} - -
      - -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach demo`](cockroach-demo.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo \ - --no-example-database - ~~~ - - This starts a temporary, in-memory cluster and opens an interactive SQL shell to the cluster. Any changes to the database will not persist after the cluster is stopped. - - {{site.data.alerts.callout_info}} - If `cockroach demo` fails due to SSL authentication, make sure you have cleared any previously downloaded CA certificates from the directory `~/.postgresql`. - {{site.data.alerts.end}} - -1. Take note of the `(sql)` connection string in the SQL shell welcome text: - - ~~~ - # Connection parameters: - # (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - # (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - # (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - ~~~ - -
- diff --git a/src/current/_includes/v21.2/app/upperdb-basic-sample/main.go b/src/current/_includes/v21.2/app/upperdb-basic-sample/main.go deleted file mode 100644 index 3e838fe43e2..00000000000 --- a/src/current/_includes/v21.2/app/upperdb-basic-sample/main.go +++ /dev/null @@ -1,187 +0,0 @@ -package main - -import ( - "fmt" - "log" - "time" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/adapter/cockroachdb" -) - -// The settings variable stores connection details. -var settings = cockroachdb.ConnectionURL{ - Host: "localhost", - Database: "bank", - User: "maxroach", - Options: map[string]string{ - // Secure node. - "sslrootcert": "certs/ca.crt", - "sslkey": "certs/client.maxroach.key", - "sslcert": "certs/client.maxroach.crt", - }, -} - -// Accounts is a handy way to represent a collection. -func Accounts(sess db.Session) db.Store { - return sess.Collection("accounts") -} - -// Account is used to represent a single record in the "accounts" table. -type Account struct { - ID uint64 `db:"id,omitempty"` - Balance int64 `db:"balance"` -} - -// Store is required in order to create a relation between the Account -// struct and the "accounts" table. -func (a *Account) Store(sess db.Session) db.Store { - return Accounts(sess) -} - -// createTables creates all the tables that are necessary to run this example. -func createTables(sess db.Session) error { - _, err := sess.SQL().Exec(` - CREATE TABLE IF NOT EXISTS accounts ( - ID SERIAL PRIMARY KEY, - balance INT - ) - `) - if err != nil { - return err - } - return nil -} - -// crdbForceRetry can be used to simulate a transaction error and -// demonstrate upper/db's ability to retry the transaction automatically. -// -// By default, upper/db will retry the transaction five times; if you want -// to modify this number, use: sess.SetMaxTransactionRetries(n). -// -// This is only used for demonstration purposes and not intended -// for production code. -func crdbForceRetry(sess db.Session) error { - var err error - - // The first statement in a transaction can be retried transparently on the - // server, so we need to add a placeholder statement so that our - // force_retry() statement isn't the first one. - _, err = sess.SQL().Exec(`SELECT 1`) - if err != nil { - return err - } - - // If force_retry is called during the specified interval from the beginning - // of the transaction it returns a retryable error. If not, 0 is returned - // instead of an error. - _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`) - if err != nil { - return err - } - - return nil - } - -func main() { - // Connect to the local CockroachDB node. - sess, err := cockroachdb.Open(settings) - if err != nil { - log.Fatal("cockroachdb.Open: ", err) - } - defer sess.Close() - - // Adjust this number to fit your specific needs (set to 5 by default): - // sess.SetMaxTransactionRetries(10) - - // Create the "accounts" table. - if err := createTables(sess); err != nil { - log.Fatal("createTables: ", err) - } - - // Delete all the previous items in the "accounts" table. - err = Accounts(sess).Truncate() - if err != nil { - log.Fatal("Truncate: ", err) - } - - // Create a new account with a balance of 1000. - account1 := Account{Balance: 1000} - err = Accounts(sess).InsertReturning(&account1) - if err != nil { - log.Fatal("InsertReturning: ", err) - } - - // Create a new account with a balance of 250.
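	// InsertReturning also reloads the struct after the INSERT, so
	// server-generated values (such as the id column) are populated.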
- account2 := Account{Balance: 250} - err = Accounts(sess).InsertReturning(&account2) - if err != nil { - log.Fatal("InsertReturning: ", err) - } - - // Printing records - printRecords(sess) - - // Change the balance of the first account. - account1.Balance = 500 - err = sess.Save(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Change the balance of the second account. - account2.Balance = 999 - err = sess.Save(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Delete the first record. - err = sess.Delete(&account1) - if err != nil { - log.Fatal("Delete: ", err) - } - - startTime := time.Now() - - // Add a couple of new records within a transaction. - err = sess.Tx(func(tx db.Session) error { - var err error - - if err = tx.Save(&Account{Balance: 887}); err != nil { - return err - } - - if time.Since(startTime) < time.Second { - // Will fail continuously for 1 second. - if err = crdbForceRetry(tx); err != nil { - return err - } - } - - if err = tx.Save(&Account{Balance: 342}); err != nil { - return err - } - - return nil - }) - if err != nil { - log.Fatal("Could not commit transaction: ", err) - } - - // Printing records - printRecords(sess) -} - -func printRecords(sess db.Session) { - accounts := []Account{} - err := Accounts(sess).Find().All(&accounts) - if err != nil { - log.Fatal("Find: ", err) - } - log.Printf("Balances:") - for i := range accounts { - fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance) - } -} diff --git a/src/current/_includes/v21.2/backups/advanced-examples-list.md b/src/current/_includes/v21.2/backups/advanced-examples-list.md deleted file mode 100644 index d6ace4c8a31..00000000000 --- a/src/current/_includes/v21.2/backups/advanced-examples-list.md +++ /dev/null @@ -1,9 +0,0 @@ -For examples of advanced `BACKUP` and `RESTORE` use cases, see: - -- [Incremental backups with a specified destination](take-full-and-incremental-backups.html#incremental-backups-with-explicitly-specified-destinations) -- [Backup with revision history and point-in-time restore](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) -- [Locality-aware backup and restore](take-and-restore-locality-aware-backups.html) -- [Encrypted backup and restore](take-and-restore-encrypted-backups.html) -- [Restore into a different database](restore.html#restore-tables-into-a-different-database) -- [Remove the foreign key before restore](restore.html#remove-the-foreign-key-before-restore) -- [Restoring users from `system.users` backup](restore.html#restoring-users-from-system-users-backup) diff --git a/src/current/_includes/v21.2/backups/aws-auth-note.md b/src/current/_includes/v21.2/backups/aws-auth-note.md deleted file mode 100644 index 759a8ad1d3a..00000000000 --- a/src/current/_includes/v21.2/backups/aws-auth-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The examples in this section use the **default** `AUTH=specified` parameter. For more detail on how to use `implicit` authentication with Amazon S3 buckets, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication).
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/backups/backup-options.md b/src/current/_includes/v21.2/backups/backup-options.md deleted file mode 100644 index de2c2bde4e9..00000000000 --- a/src/current/_includes/v21.2/backups/backup-options.md +++ /dev/null @@ -1,6 +0,0 @@ - Option | Value | Description ------------------------------------------------------------------+-------------------------+------------------------------ -`revision_history` | N/A | Create a backup with full [revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), which records every change made to the cluster within the garbage collection period leading up to and including the given timestamp. -`encryption_passphrase` | [`STRING`](string.html) | The passphrase used to [encrypt the files](take-and-restore-encrypted-backups.html) (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same passphrase is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html). There is no practical limit on the length of the passphrase. -`DETACHED` | N/A | When a backup runs in `DETACHED` mode, it will execute asynchronously. The job ID will be returned after the backup job creation completes. Note that with `DETACHED` specified, further job information and the job completion status will not be returned. For more on the differences between the returned job data, see the [example](backup.html#run-a-backup-asynchronously) below. To check on the job status, use the [`SHOW JOBS`](show-jobs.html) statement.

      To run a backup within a [transaction](transactions.html), use the `DETACHED` option. -`kms` | [`STRING`](string.html) | The [key management service (KMS) URI](take-and-restore-encrypted-backups.html#aws-kms-uri-format) (or a [comma-separated list of URIs](take-and-restore-encrypted-backups.html#take-a-backup-with-multi-region-encryption)) used to encrypt the files (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same KMS URI is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html#restore-from-an-encrypted-backup-with-aws-kms) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html).

      Currently, only AWS KMS is supported. diff --git a/src/current/_includes/v21.2/backups/bulk-auth-options.md b/src/current/_includes/v21.2/backups/bulk-auth-options.md deleted file mode 100644 index 14c9298f024..00000000000 --- a/src/current/_includes/v21.2/backups/bulk-auth-options.md +++ /dev/null @@ -1,4 +0,0 @@ -The following examples make use of: - -- Amazon S3 connection strings. For guidance on connecting to other storage options or using other authentication parameters instead, read [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html#example-file-urls). -- The **default** `AUTH=specified` parameter. For guidance on using `AUTH=implicit` authentication with Amazon S3 buckets instead, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). diff --git a/src/current/_includes/v21.2/backups/destination-file-privileges.md b/src/current/_includes/v21.2/backups/destination-file-privileges.md deleted file mode 100644 index 913e042461c..00000000000 --- a/src/current/_includes/v21.2/backups/destination-file-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The destination file URL does **not** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. -- [Userfile](use-userfile-for-bulk-operations.html) - -The destination file URL **does** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). diff --git a/src/current/_includes/v21.2/backups/encrypted-backup-description.md b/src/current/_includes/v21.2/backups/encrypted-backup-description.md deleted file mode 100644 index f0c39d2551a..00000000000 --- a/src/current/_includes/v21.2/backups/encrypted-backup-description.md +++ /dev/null @@ -1,11 +0,0 @@ -You can encrypt full or incremental backups with a passphrase by using the [`encryption_passphrase` option](backup.html#with-encryption-passphrase). Files written by the backup (including `BACKUP` manifests and data files) are encrypted using the specified passphrase to derive a key. To restore the encrypted backup, the same `encryption_passphrase` option (with the same passphrase) must be included in the [`RESTORE`](restore.html) statement. - -When used with [incremental backups](take-full-and-incremental-backups.html#incremental-backups), the `encryption_passphrase` option is applied to all the [backup file URLs](backup.html#backup-file-urls), which means the same passphrase must be used when appending another incremental backup to an existing backup. Similarly, when used with [locality-aware backups](take-and-restore-locality-aware-backups.html), the passphrase provided is applied to files in all localities. - -Encryption is done using [AES-256-GCM](https://en.wikipedia.org/wiki/Galois/Counter_Mode), and GCM is used to both encrypt and authenticate the files. 
A random [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) is used to derive a once-per-backup [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) key from the specified passphrase, and then a random [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector) is used per-file. CockroachDB uses [PBKDF2](https://en.wikipedia.org/wiki/PBKDF2) with 64,000 iterations for the key derivation. - -{{site.data.alerts.callout_info}} -`BACKUP` and `RESTORE` will use more memory when using encryption, as both the plain-text and cipher-text of a given file are held in memory during encryption and decryption. -{{site.data.alerts.end}} - -For an example of an encrypted backup, see [Create an encrypted backup](take-and-restore-encrypted-backups.html#take-an-encrypted-backup-using-a-passphrase). diff --git a/src/current/_includes/v21.2/backups/file-size-setting.md b/src/current/_includes/v21.2/backups/file-size-setting.md deleted file mode 100644 index 8f94d415e11..00000000000 --- a/src/current/_includes/v21.2/backups/file-size-setting.md +++ /dev/null @@ -1,5 +0,0 @@ -{{site.data.alerts.callout_info}} -To set a target for the amount of backup data written to each backup file, use the `bulkio.backup.file_size` [cluster setting](cluster-settings.html). - -See the [`SET CLUSTER SETTING`](set-cluster-setting.html) page for more details on using cluster settings. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/backups/gcs-auth-note.md b/src/current/_includes/v21.2/backups/gcs-auth-note.md deleted file mode 100644 index 360ea21cb63..00000000000 --- a/src/current/_includes/v21.2/backups/gcs-auth-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The examples in this section use the `AUTH=specified` parameter, which will be the default behavior in v21.2 and beyond for connecting to Google Cloud Storage. For more detail on how to pass your Google Cloud Storage credentials with this parameter, or, how to use `implicit` authentication, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/backups/gcs-default-deprec.md b/src/current/_includes/v21.2/backups/gcs-default-deprec.md deleted file mode 100644 index aafea15e804..00000000000 --- a/src/current/_includes/v21.2/backups/gcs-default-deprec.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -**Deprecation notice:** Currently, GCS connections default to the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html). This default behavior will no longer be supported in v21.2. If you are relying on this default behavior, we recommend adjusting your queries and scripts to now specify the `AUTH` parameter you want to use. Similarly, if you are using the `cloudstorage.gs.default.key` cluster setting to authorize your GCS connection, we recommend switching to use `AUTH=specified` or `AUTH=implicit`. `AUTH=specified` will be the default behavior in v21.2 and beyond. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/backups/no-incremental-restore.md b/src/current/_includes/v21.2/backups/no-incremental-restore.md deleted file mode 100644 index b2f071c1e5e..00000000000 --- a/src/current/_includes/v21.2/backups/no-incremental-restore.md +++ /dev/null @@ -1 +0,0 @@ -When you restore from an incremental backup, you're restoring the **entire** table, database, or cluster. 
CockroachDB uses both the latest (or a [specific](restore.html#restore-a-specific-backup)) incremental backup and the full backup during this process. You cannot restore an incremental backup without a full backup. Furthermore, it is not possible to restore over a [table](restore.html#tables), [database](restore.html#databases), or [cluster](restore.html#full-cluster) with existing data. See [Restore types](restore.html#restore-types) for detail on the types of backups you can restore. diff --git a/src/current/_includes/v21.2/cdc/avro-limitations.md b/src/current/_includes/v21.2/cdc/avro-limitations.md deleted file mode 100644 index feb5a48145d..00000000000 --- a/src/current/_includes/v21.2/cdc/avro-limitations.md +++ /dev/null @@ -1,29 +0,0 @@ -- [Decimals](decimal.html) must have precision specified. -- [`BYTES`](bytes.html) (or its aliases `BYTEA` and `BLOB`) are often used to store machine-readable data. When you stream these types through a changefeed with [`format=avro`](create-changefeed.html#format), CockroachDB does not encode or change the data. However, Avro clients can often include escape sequences to present the data in a printable format, which can interfere with deserialization. A potential solution is to hex-encode `BYTES` values when initially inserting them into CockroachDB. This ensures that Avro clients can consistently decode the hexadecimal. Note that hex-encoding values at insertion will increase the record size. -- [`BIT`](bit.html) and [`VARBIT`](bit.html) types are encoded as arrays of 64-bit integers. - - For efficiency, CockroachDB encodes `BIT` and `VARBIT` bitfield types as arrays of 64-bit integers. That is, [base-2 (binary format)](https://en.wikipedia.org/wiki/Binary_number#Conversion_to_and_from_other_numeral_systems) `BIT` and `VARBIT` data types are converted to base 10 and stored in arrays. Encoding in CockroachDB is [big-endian](https://en.wikipedia.org/wiki/Endianness), therefore the last value may have many trailing zeroes. For this reason, the first value of each array is the number of bits that are used in the last value of the array. - - For instance, if the bitfield is 129 bits long, there will be 4 integers in the array. The first integer will be `1`, representing the number of bits in the last value; the second integer will be the first 64 bits; the third integer will be bits 65–128; and the last integer will either be `0` or `9223372036854775808` (i.e., the integer with only the first bit set, or `1000000000000000000000000000000000000000000000000000000000000000` when base 2). - - This example is base-10 encoded into an array as follows: - - ~~~ - {"array": [1, <first 64 bits>, <bits 65–128>, 0 or 9223372036854775808]} - ~~~ - - For downstream processing, it is necessary to base-2 encode every element in the array (except for the first element). The first number in the array gives you the number of bits to take from the last base-2 number — that is, the most significant bits. So, in the example above this would be `1`. Finally, all the base-2 numbers can be appended together, which will result in the original number of bits, 129. - - In a different example of this process where the bitfield is 136 bits long, the array would be similar to the following when base-10 encoded: - - ~~~ - {"array": [8, 18293058736425533439, 18446744073709551615, 13690942867206307840]} - ~~~ - - To then work with this data, you would convert each of the elements in the array to base-2 numbers, except for the first element.
For the above array, this would convert to: - - ~~~ - [8, 1111110111011011111111111111111111111111111111111111111111111111, 1111111111111111111111111111111111111111111111111111111111111111, 1011111000000000000000000000000000000000000000000000000000000000] - ~~~ - - Next, you use the first element in the array to take the number of bits from the last base-2 element, `10111110`. Finally, you append each of the base-2 numbers together — in the above array, the second, third, and truncated last element. This results in 136 bits, the original number of bits. diff --git a/src/current/_includes/v21.2/cdc/client-key-encryption.md b/src/current/_includes/v21.2/cdc/client-key-encryption.md deleted file mode 100644 index c7c7be4c38c..00000000000 --- a/src/current/_includes/v21.2/cdc/client-key-encryption.md +++ /dev/null @@ -1 +0,0 @@ -**Note:** Client keys are often encrypted. You will receive an error if you pass an encrypted client key in your changefeed statement. To decrypt the client key, run: `openssl rsa -in key.pem -out key.decrypt.pem -passin pass:{PASSWORD}`. Once decrypted, be sure to update your changefeed statement to use the new `key.decrypt.pem` file instead. \ No newline at end of file diff --git a/src/current/_includes/v21.2/cdc/configure-all-changefeed.md b/src/current/_includes/v21.2/cdc/configure-all-changefeed.md deleted file mode 100644 index 0c90c328d50..00000000000 --- a/src/current/_includes/v21.2/cdc/configure-all-changefeed.md +++ /dev/null @@ -1,19 +0,0 @@ -It is useful to be able to pause all running changefeeds during troubleshooting, testing, or when a decrease in CPU load is needed. - -To pause all running changefeeds: - -{% include_cached copy-clipboard.html %} -~~~sql -PAUSE JOBS (SELECT job_id FROM [SHOW CHANGEFEED JOBS] WHERE status = ('running')); -~~~ - -This will change the status for each of the running changefeeds to `paused`, which can be verified with [`SHOW CHANGEFEED JOBS`](show-jobs.html#show-changefeed-jobs). - -To resume all paused changefeeds: - -{% include_cached copy-clipboard.html %} -~~~sql -RESUME JOBS (SELECT job_id FROM [SHOW CHANGEFEED JOBS] WHERE status = ('paused')); -~~~ - -This will resume the changefeeds and update the status for each of the changefeeds to `running`. diff --git a/src/current/_includes/v21.2/cdc/confluent-cloud-sr-url.md b/src/current/_includes/v21.2/cdc/confluent-cloud-sr-url.md deleted file mode 100644 index d9d5691b57f..00000000000 --- a/src/current/_includes/v21.2/cdc/confluent-cloud-sr-url.md +++ /dev/null @@ -1 +0,0 @@ -To connect to Confluent Cloud, use the following URL structure: `'https://{API_KEY_ID}:{API_SECRET_URL_ENCODED}@{CONFLUENT_REGISTRY_URL}:443'`. See the [Confluent Cloud Schema Registry Tutorial](https://docs.confluent.io/platform/current/schema-registry/schema_registry_ccloud_tutorial.html) for further detail. \ No newline at end of file diff --git a/src/current/_includes/v21.2/cdc/core-csv.md b/src/current/_includes/v21.2/cdc/core-csv.md deleted file mode 100644 index 4ee6bfc587d..00000000000 --- a/src/current/_includes/v21.2/cdc/core-csv.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results.
To set the display format, use the [`--format=csv` flag](cockroach-sql.html#sql-flag-format) when starting the [built-in SQL client](cockroach-sql.html), or set the [`\set display_format=csv` option](cockroach-sql.html#client-side-options) once the SQL client is open. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/cdc/core-url.md b/src/current/_includes/v21.2/cdc/core-url.md deleted file mode 100644 index 7241e203aa7..00000000000 --- a/src/current/_includes/v21.2/cdc/core-url.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](cancel-query.html) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/cdc/create-core-changefeed-avro.md b/src/current/_includes/v21.2/cdc/create-core-changefeed-avro.md deleted file mode 100644 index 14051253a22..00000000000 --- a/src/current/_includes/v21.2/cdc/create-core-changefeed-avro.md +++ /dev/null @@ -1,122 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas. - -1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node \ - --insecure \ - --listen-addr=localhost \ - --background - ~~~ - -2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/). - -3. Move into the extracted `confluent-` directory and start Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local services start - ~~~ - - Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives) and the [Quick Start Guide](https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html#ce-quickstart). - -4. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-url.md %} - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -6. Create table `bar`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bar (a INT PRIMARY KEY); - ~~~ - -7. 
Insert a row into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bar VALUES (0); - ~~~ - -8. Start the core changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR bar WITH format = avro, confluent_schema_registry = 'http://localhost:8081'; - ~~~ - - ~~~ - table,key,value - bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000 - ~~~ - -9. In a new terminal, add another row: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)" - ~~~ - -10. Back in the terminal where the core changefeed is streaming, the output will appear: - - ~~~ - bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002 - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -11. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -12. To stop `cockroach`: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 21766 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ - -13. To stop Confluent, move into the extracted `confluent-` directory and stop Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local services stop - ~~~ - - To terminate all Confluent processes, use: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local destroy - ~~~ diff --git a/src/current/_includes/v21.2/cdc/create-core-changefeed.md b/src/current/_includes/v21.2/cdc/create-core-changefeed.md deleted file mode 100644 index fa397cd36f5..00000000000 --- a/src/current/_includes/v21.2/cdc/create-core-changefeed.md +++ /dev/null @@ -1,98 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster. - -1. In a terminal window, start `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node \ - --insecure \ - --listen-addr=localhost \ - --background - ~~~ - -2. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url="postgresql://root@127.0.0.1:26257?sslmode=disable" \ - --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-url.md %} - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -3. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -4. Create table `foo`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE foo (a INT PRIMARY KEY); - ~~~ - -5. Insert a row into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO foo VALUES (0); - ~~~ - -6. Start the core changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR foo; - ~~~ - ~~~ - table,key,value - foo,[0],"{""after"": {""a"": 0}}" - ~~~ - -7. 
In a new terminal, add another row: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)" - ~~~ - -8. Back in the terminal where the core changefeed is streaming, the following output has appeared: - - ~~~ - foo,[1],"{""after"": {""a"": 1}}" - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -9. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -10. To stop `cockroach`: - - Get the process ID of the node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - ps -ef | grep cockroach | grep -v grep - ~~~ - - ~~~ - 501 21766 1 0 6:21PM ttys001 0:00.89 cockroach start-single-node --insecure --listen-addr=localhost - ~~~ - - Gracefully shut down the node, specifying its process ID: - - {% include_cached copy-clipboard.html %} - ~~~ shell - kill -TERM 21766 - ~~~ - - ~~~ - initiating graceful shutdown of server - server drained and shutdown completed - ~~~ diff --git a/src/current/_includes/v21.2/cdc/create-example-db-cdc.md b/src/current/_includes/v21.2/cdc/create-example-db-cdc.md deleted file mode 100644 index 17902b10eac..00000000000 --- a/src/current/_includes/v21.2/cdc/create-example-db-cdc.md +++ /dev/null @@ -1,50 +0,0 @@ -1. Create a database called `cdc_demo`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE cdc_demo; - ~~~ - -1. Set the database as the default: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET DATABASE = cdc_demo; - ~~~ - -1. Create a table and add data: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO office_dogs VALUES - (1, 'Petee'), - (2, 'Carl'); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1; - ~~~ - -1. 
Create another table and add data: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE employees ( - dog_id INT REFERENCES office_dogs (id), - employee_name STRING); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO employees VALUES - (1, 'Lauren'), - (2, 'Spencer'); - ~~~ diff --git a/src/current/_includes/v21.2/cdc/external-urls.md b/src/current/_includes/v21.2/cdc/external-urls.md deleted file mode 100644 index f4aa029779a..00000000000 --- a/src/current/_includes/v21.2/cdc/external-urls.md +++ /dev/null @@ -1,48 +0,0 @@ -~~~ -[scheme]://[host]/[path]?[parameters] -~~~ - -Location | Scheme | Host | Parameters | -|-------------------------------------------------------------+-------------+--------------------------------------------------+---------------------------------------------------------------------------- -Amazon | `s3` | Bucket name | `AUTH` [1](#considerations) (optional; can be `implicit` or `specified`), `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` -Azure | `azure` | N/A (see [Example file URLs](#example-file-urls) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME` -Google Cloud [2](#considerations) | `gs` | Bucket name | `AUTH` (optional; can be `default`, `implicit`, or `specified`), `CREDENTIALS` -HTTP [3](#considerations) | `http` | Remote host | N/A -NFS/Local [4](#considerations) | `nodelocal` | `nodeID` or `self` [5](#considerations) (see [Example file URLs](#example-file-urls)) | N/A -S3-compatible services [6](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [7](#considerations) (optional), `AWS_ENDPOINT` - -{{site.data.alerts.callout_info}} -The location parameters often contain special characters that need to be URI-encoded. Use Javascript's [`encodeURIComponent`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [`url.QueryEscape`](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB. - - If you cannot run a full proxy, you can disable external HTTP(S) access (as well as custom HTTP(S) endpoints) when performing bulk operations (e.g., [`BACKUP`](backup.html), [`RESTORE`](restore.html), etc.) by using the [`--external-io-disable-http` flag](cockroach-start.html#security). You can also disable the use of implicit credentials when accessing external cloud storage services for various bulk operations by using the [`--external-io-disable-implicit-credentials` flag](cockroach-start.html#security). -{{site.data.alerts.end}} - - - -- 1 If the `AUTH` parameter is not provided, AWS connections default to `specified` and the access keys must be provided in the URI parameters. If the `AUTH` parameter is `implicit`, the access keys can be omitted and [the credentials will be loaded from the environment](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/). - -- 2 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) will be used if it is non-empty, otherwise the `implicit` behavior is used. 
If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be Base64-encoded (using the standard encoding in [RFC 4648](https://tools.ietf.org/html/rfc4648)). - -- 3 You can create your own HTTP server with [Caddy or nginx](use-a-local-file-server-for-bulk-operations.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs. - -- 4 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](cockroach-start.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled. - -- 5 Using a `nodeID` is required and the data files will be in the `extern` directory of the specified node. In most cases (including single-node clusters), using `nodelocal://1/` is sufficient. Use `self` if you do not want to specify a `nodeID`, and the individual data files will be in the `extern` directories of arbitrary nodes; however, to work correctly, each node must have the [`--external-io-dir` flag](cockroach-start.html#general) point to the same NFS mount or other network-backed, shared storage. - -- 6 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service. - -- 7 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it. - -#### Example file URLs - -Location | Example --------------+---------------------------------------------------------------------------------- -Amazon S3 | `s3://acme-co/employees?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456` -Azure | `azure://employees?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co` -Google Cloud | `gs://acme-co` -HTTP | `http://localhost:8080/employees` -NFS/Local | `nodelocal://1/path/employees`, `nodelocal://self/nfsmount/backups/employees` [5](#considerations) diff --git a/src/current/_includes/v21.2/cdc/metrics-labels.md b/src/current/_includes/v21.2/cdc/metrics-labels.md deleted file mode 100644 index 8e65cb35eb8..00000000000 --- a/src/current/_includes/v21.2/cdc/metrics-labels.md +++ /dev/null @@ -1,10 +0,0 @@ -To measure metrics per changefeed, define a "metrics label" to which one or multiple changefeed(s) will increment each [changefeed metric](monitor-and-debug-changefeeds.html#metrics). Metrics label information is sent with time-series metrics to `http://{host}:{http-port}/_status/vars`, viewable via the [Prometheus endpoint](monitoring-and-alerting.html#prometheus-endpoint). 
An aggregated metric of all changefeeds is also measured. - -It is necessary to consider the following when applying metrics labels to changefeeds: - -- Metrics labels are **not** available in CockroachDB {{ site.data.products.cloud }}. -- The `COCKROACH_EXPERIMENTAL_ENABLE_PER_CHANGEFEED_METRICS` [environment variable](cockroach-commands.html#environment-variables) must be specified to use this feature. -- The `server.child_metrics.enabled` [cluster setting](cluster-settings.html) must be set to `true` before using the `metrics_label` option. -- Metrics label information is sent to the `_status/vars` endpoint, but will **not** show up in [`debug.zip`](cockroach-debug-zip.html) or the [DB Console](ui-overview.html). -- Introducing labels to isolate a changefeed's metrics can increase cardinality significantly. There is a limit of 1024 unique labels in place to prevent cardinality explosion. That is, when labels are applied to high-cardinality data (data with a higher number of unique values), each changefeed with a label then results in more metrics data to multiply together, which will grow over time. This will have an impact on performance as the metric-series data per changefeed quickly populates against its label. -- The maximum length of a metrics label is 128 bytes. diff --git a/src/current/_includes/v21.2/cdc/options-table-note.md b/src/current/_includes/v21.2/cdc/options-table-note.md deleted file mode 100644 index 61a27aefcc0..00000000000 --- a/src/current/_includes/v21.2/cdc/options-table-note.md +++ /dev/null @@ -1 +0,0 @@ -This table shows the parameters for changefeeds to a specific sink. The `CREATE CHANGEFEED` page provides a list of all the available [options](create-changefeed.html#options). diff --git a/src/current/_includes/v21.2/cdc/print-key.md b/src/current/_includes/v21.2/cdc/print-key.md deleted file mode 100644 index ab0b0924d30..00000000000 --- a/src/current/_includes/v21.2/cdc/print-key.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -This example only prints the value. To print both the key and value of each message in the changefeed (e.g., to observe what happens with `DELETE`s), use the `--property print.key=true` flag. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/cdc/sql-cluster-settings-example.md b/src/current/_includes/v21.2/cdc/sql-cluster-settings-example.md deleted file mode 100644 index 17e353f7ab2..00000000000 --- a/src/current/_includes/v21.2/cdc/sql-cluster-settings-example.md +++ /dev/null @@ -1,25 +0,0 @@ -1. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -1. Set your organization name and [{{ site.data.products.enterprise }} license](enterprise-licensing.html) key that you received via email: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.organization = ''; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING enterprise.license = ''; - ~~~ - -1. 
Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ diff --git a/src/current/_includes/v21.2/cdc/url-encoding.md b/src/current/_includes/v21.2/cdc/url-encoding.md deleted file mode 100644 index 2a681d7f913..00000000000 --- a/src/current/_includes/v21.2/cdc/url-encoding.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Parameters should always be URI-encoded before they are included in the changefeed's URI, as they often contain special characters. Use Javascript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/cdc/webhook-beta.md b/src/current/_includes/v21.2/cdc/webhook-beta.md deleted file mode 100644 index 04e0537e845..00000000000 --- a/src/current/_includes/v21.2/cdc/webhook-beta.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The webhook sink is currently in **beta** — see [usage considerations](changefeed-sinks.html#webhook-sink), available [parameters](create-changefeed.html#parameters), and [options](create-changefeed.html#options) for more information. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/client-transaction-retry.md b/src/current/_includes/v21.2/client-transaction-retry.md deleted file mode 100644 index 6a54534169e..00000000000 --- a/src/current/_includes/v21.2/client-transaction-retry.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` [isolation level](transactions.html#isolation-levels), CockroachDB may require the client to [retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a [generic retry function](transactions.html#client-side-intervention) that runs inside a transaction and retries it as needed. The code sample below shows how it is used. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/computed-columns/add-computed-column.md b/src/current/_includes/v21.2/computed-columns/add-computed-column.md deleted file mode 100644 index 5eff580e575..00000000000 --- a/src/current/_includes/v21.2/computed-columns/add-computed-column.md +++ /dev/null @@ -1,55 +0,0 @@ -In this example, create a table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE x ( - a INT NULL, - b INT NULL AS (a * 2) STORED, - c INT NULL AS (a + 4) STORED, - FAMILY "primary" (a, b, rowid, c) - ); -~~~ - -Then, insert a row of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO x VALUES (6); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+ -| a | b | c | -+---+----+----+ -| 6 | 12 | 10 | -+---+----+----+ -(1 row) -~~~ - -Now add a virtual computed column to the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE x ADD COLUMN d INT AS (a // 2) VIRTUAL; -~~~ - -The `d` column is added to the table and computed from the `a` column divided by 2.
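Here `//` is floor division, so with `a = 6` the expression `a // 2` evaluates to `3`. As a quick, purely illustrative check of the operator on its own:

{% include_cached copy-clipboard.html %}
~~~ sql
> SELECT 6 // 2;
~~~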
- -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+---+ -| a | b | c | d | -+---+----+----+---+ -| 6 | 12 | 10 | 3 | -+---+----+----+---+ -(1 row) -~~~ diff --git a/src/current/_includes/v21.2/computed-columns/alter-computed-column.md b/src/current/_includes/v21.2/computed-columns/alter-computed-column.md deleted file mode 100644 index 0c554f1c630..00000000000 --- a/src/current/_includes/v21.2/computed-columns/alter-computed-column.md +++ /dev/null @@ -1,76 +0,0 @@ -To alter the formula for a computed column, you must [`DROP`](drop-column.html) and [`ADD`](add-column.html) the column back with the new definition. Take the following table for instance: - -{% include_cached copy-clipboard.html %} -~~~sql -> CREATE TABLE x ( -a INT NULL, -b INT NULL AS (a * 2) STORED, -c INT NULL AS (a + 4) STORED, -FAMILY "primary" (a, b, rowid, c) -); -~~~ -~~~ -CREATE TABLE - - -Time: 4ms total (execution 4ms / network 0ms) -~~~ - -Add a computed column `d`: - -{% include_cached copy-clipboard.html %} -~~~sql -> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED; -~~~ -~~~ -ALTER TABLE - - -Time: 199ms total (execution 199ms / network 0ms) -~~~ - -If you try to alter it, you'll get an error: - -{% include_cached copy-clipboard.html %} -~~~sql -> ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED; -~~~ -~~~ -invalid syntax: statement ignored: at or near "int": syntax error -SQLSTATE: 42601 -DETAIL: source SQL: -ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED - ^ -HINT: try \h ALTER TABLE -~~~ - -However, you can drop it and then add it with the new definition: - -{% include_cached copy-clipboard.html %} -~~~sql -> SET sql_safe_updates = false; -> ALTER TABLE x DROP COLUMN d; -> ALTER TABLE x ADD COLUMN d INT AS (a // 3) STORED; -> SET sql_safe_updates = true; -~~~ -~~~ -SET - - -Time: 1ms total (execution 0ms / network 0ms) - -ALTER TABLE - - -Time: 195ms total (execution 195ms / network 0ms) - -ALTER TABLE - - -Time: 186ms total (execution 185ms / network 0ms) - -SET - - -Time: 0ms total (execution 0ms / network 0ms) -~~~ diff --git a/src/current/_includes/v21.2/computed-columns/convert-computed-column.md b/src/current/_includes/v21.2/computed-columns/convert-computed-column.md deleted file mode 100644 index 2c9897b8319..00000000000 --- a/src/current/_includes/v21.2/computed-columns/convert-computed-column.md +++ /dev/null @@ -1,108 +0,0 @@ -You can convert a stored, computed column into a regular column by using `ALTER TABLE`. 
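In general form — a sketch with placeholder names, matching the concrete statement used in the example below:

{% include_cached copy-clipboard.html %}
~~~ sql
> ALTER TABLE <table_name> ALTER COLUMN <column_name> DROP STORED;
~~~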
- -In this example, create a simple table with a computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED - ); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name) VALUES - (1, 'Petee', 'Hirata'), - (2, 'Carl', 'Kimball'), - (3, 'Ernie', 'Narayan'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+---------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+---------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -+----+------------+-----------+---------------+ -(3 rows) -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -(4 rows) -~~~ - -Now, convert the computed column (`full_name`) to a regular column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED; -~~~ - -Check that the computed column was converted: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | | {} | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -(4 rows) -~~~ - -The computed column is now a regular column and can be updated as such: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+----------------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+----------------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -| 4 | Lola | McDog | This is not computed | 
-+----+------------+-----------+----------------------+ -(4 rows) -~~~ diff --git a/src/current/_includes/v21.2/computed-columns/jsonb.md b/src/current/_includes/v21.2/computed-columns/jsonb.md deleted file mode 100644 index 6b0ca92f80c..00000000000 --- a/src/current/_includes/v21.2/computed-columns/jsonb.md +++ /dev/null @@ -1,70 +0,0 @@ -In this example, create a table with a `JSONB` column and a stored computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE student_profiles ( - id STRING PRIMARY KEY AS (profile->>'id') STORED, - profile JSONB -); -~~~ - -You can also add a computed column after the table is created: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE student_profiles ADD COLUMN age INT AS ((profile->>'age')::INT) STORED; -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO student_profiles (profile) VALUES - ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'), - ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'), - ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM student_profiles; -~~~ -~~~ -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -| id | profile | age | -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} | 16 | -| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} | 15 | -| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | NULL | -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -~~~ - -The primary key `id` is computed as a field from the `profile` column, and the `age` column is likewise computed from the `profile` column's data. 
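Because `age` is materialized as a stored column, it can also back a secondary index so that queries filtering on the `JSONB` field avoid a full scan — a minimal sketch, assuming the `student_profiles` table above (the index name is illustrative):

{% include_cached copy-clipboard.html %}
~~~ sql
> CREATE INDEX student_age_idx ON student_profiles (age);
> SELECT id, age FROM student_profiles WHERE age >= 16;
~~~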
- -This example shows how to add a stored computed column with a [coerced type](scalar-expressions.html#explicit-type-coercions): - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE json_data ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - json_info JSONB -); -INSERT INTO json_data (json_info) VALUES ('{"amount": "123.45"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE json_data ADD COLUMN amount DECIMAL AS ((json_info->>'amount')::DECIMAL) STORED; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM json_data; -~~~ - -~~~ - id | json_info | amount ----------------------------------------+----------------------+--------- - e7c3d706-1367-4d77-bfb4-386dfdeb10f9 | {"amount": "123.45"} | 123.45 -(1 row) -~~~ diff --git a/src/current/_includes/v21.2/computed-columns/secondary-index.md b/src/current/_includes/v21.2/computed-columns/secondary-index.md deleted file mode 100644 index 8b78325e695..00000000000 --- a/src/current/_includes/v21.2/computed-columns/secondary-index.md +++ /dev/null @@ -1,63 +0,0 @@ -In this example, create a table with a virtual computed column and an index on that column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE gymnastics ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - athlete STRING, - vault DECIMAL, - bars DECIMAL, - beam DECIMAL, - floor DECIMAL, - combined_score DECIMAL AS (vault + bars + beam + floor) VIRTUAL, - INDEX total (combined_score DESC) - ); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES - ('Simone Biles', 15.933, 14.800, 15.300, 15.800), - ('Gabby Douglas', 0, 15.766, 0, 0), - ('Laurie Hernandez', 15.100, 0, 15.233, 14.833), - ('Madison Kocian', 0, 15.933, 0, 0), - ('Aly Raisman', 15.833, 0, 15.000, 15.366); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM gymnastics; -~~~ -~~~ -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -| id | athlete | vault | bars | beam | floor | combined_score | -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 | -| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 | -| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 | -| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 | -| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 | -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -~~~ - -Now, run a query using the secondary index: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC; -~~~ -~~~ -+------------------+----------------+ -| athlete | combined_score | -+------------------+----------------+ -| Simone Biles | 61.833 | -| Aly Raisman | 46.199 | -| Laurie Hernandez | 45.166 | -| Madison Kocian | 15.933 | -| Gabby Douglas | 15.766 | -+------------------+----------------+ -~~~ - -The athlete with the highest combined score of 61.833 is Simone Biles. 
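To check that such a query is in fact served by the `total` index rather than a full table scan, you can inspect the statement plan — a sketch; the exact output varies by CockroachDB version:

{% include_cached copy-clipboard.html %}
~~~ sql
> EXPLAIN SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
~~~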
diff --git a/src/current/_includes/v21.2/computed-columns/simple.md b/src/current/_includes/v21.2/computed-columns/simple.md deleted file mode 100644 index 24a86a59481..00000000000 --- a/src/current/_includes/v21.2/computed-columns/simple.md +++ /dev/null @@ -1,40 +0,0 @@ -In this example, let's create a simple table with a computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - city STRING, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED, - address STRING, - credit_card STRING, - dl STRING UNIQUE CHECK (LENGTH(dl) < 8) -); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users (first_name, last_name) VALUES - ('Lola', 'McDog'), - ('Carl', 'Kimball'), - ('Ernie', 'Narayan'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ -~~~ - id | city | first_name | last_name | full_name | address | credit_card | dl -+--------------------------------------+------+------------+-----------+---------------+---------+-------------+------+ - 5740da29-cc0c-47af-921c-b275d21d4c76 | NULL | Ernie | Narayan | Ernie Narayan | NULL | NULL | NULL - e7e0b748-9194-4d71-9343-cd65218848f0 | NULL | Lola | McDog | Lola McDog | NULL | NULL | NULL - f00e4715-8ca7-4d5a-8de5-ef1d5d8092f3 | NULL | Carl | Kimball | Carl Kimball | NULL | NULL | NULL -(3 rows) -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). diff --git a/src/current/_includes/v21.2/computed-columns/virtual.md b/src/current/_includes/v21.2/computed-columns/virtual.md deleted file mode 100644 index 7d873440328..00000000000 --- a/src/current/_includes/v21.2/computed-columns/virtual.md +++ /dev/null @@ -1,41 +0,0 @@ -In this example, create a table with a `JSONB` column and virtual computed columns: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE student_profiles ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - profile JSONB, - full_name STRING AS (concat_ws(' ',profile->>'firstName', profile->>'lastName')) VIRTUAL, - birthday TIMESTAMP AS (parse_timestamp(profile->>'birthdate')) VIRTUAL -); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO student_profiles (profile) VALUES - ('{"id": "d78236", "firstName": "Arthur", "lastName": "Read", "birthdate": "2010-01-25", "school": "PVPHS", "credits": 120, "sports": "none"}'), - ('{"firstName": "Buster", "lastName": "Bunny", "birthdate": "2011-11-07", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'), - ('{"firstName": "Ernie", "lastName": "Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM student_profiles; -~~~ -~~~ - id | profile | full_name | birthday ----------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+---------------+---------------------- - 0e420282-105d-473b-83e2-3b082e7033e4 | {"birthdate": "2011-11-07", "clubs": "MUN", "credits": 67, "firstName": "Buster", "id": "f98112", "lastName": "Bunny", "school": "THS"} | Buster Bunny | 2011-11-07 00:00:00 - 6e9b77cd-ec67-41ae-b346-7b3d89902c72 | {"birthdate": "2010-01-25", "credits": 120, 
"firstName": "Arthur", "id": "d78236", "lastName": "Read", "school": "PVPHS", "sports": "none"} | Arthur Read | 2010-01-25 00:00:00 - f74b21e3-dc1e-49b7-a648-3c9b9024a70f | {"clubs": "Chess", "firstName": "Ernie", "id": "t63512", "lastName": "Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | Ernie Narayan | NULL -(3 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -The virtual column `full_name` is computed as a field from the `profile` column's data. The first name and last name are concatenated and separated by a single whitespace character using the [`concat_ws` string function](functions-and-operators.html#string-and-byte-functions). - -The virtual column `birthday` is parsed as a `TIMESTAMP` value from the `profile` column's `birthdate` string value. The [`parse_timestamp` function](functions-and-operators.html) is used to parse strings in `TIMESTAMP` format. diff --git a/src/current/_includes/v21.2/demo_movr.md b/src/current/_includes/v21.2/demo_movr.md deleted file mode 100644 index cde6c211213..00000000000 --- a/src/current/_includes/v21.2/demo_movr.md +++ /dev/null @@ -1,10 +0,0 @@ -Start the [MovR database](movr.html) on a 3-node CockroachDB demo cluster with a larger data set. - -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach demo movr --num-histories 250000 --num-promo-codes 250000 --num-rides 125000 --num-users 12500 --num-vehicles 3750 --nodes 3 -~~~ - -{% comment %} -This is a test -{% endcomment %} diff --git a/src/current/_includes/v21.2/faq/auto-generate-unique-ids.html b/src/current/_includes/v21.2/faq/auto-generate-unique-ids.html deleted file mode 100644 index ee56e21b7e0..00000000000 --- a/src/current/_includes/v21.2/faq/auto-generate-unique-ids.html +++ /dev/null @@ -1,109 +0,0 @@ -To auto-generate unique row identifiers, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID NOT NULL DEFAULT gen_random_uuid(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+-------+---------+-------------+ - cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL | NULL - 2382564e-702f-42d9-a139-b6df535ae00a | seattle | Eric | NULL | NULL - 7d27e40b-263a-4891-b29b-d59135e55650 | seattle | Dan | NULL | NULL -(3 rows) -~~~ - -Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users2 ( - id BYTES DEFAULT uuid_v4(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users2 (name, city) VALUES ('Anna', 'new york'), ('Jonah', 'seattle'), ('Terry', 'chicago'); -~~~ 
- -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ - -~~~ - id | city | name | address | credit_card -+------------------------------------------------+----------+-------+---------+-------------+ - 4\244\277\323/\261M\007\213\275*\0060\346\025z | chicago | Terry | NULL | NULL - \273*t=u.F\010\274f/}\313\332\373a | new york | Anna | NULL | NULL - \004\\\364nP\024L)\252\364\222r$\274O0 | seattle | Jonah | NULL | NULL -(3 rows) -~~~ - -In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 512 MiB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load. - -This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index. - -If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users3 ( - id INT DEFAULT unique_rowid(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users3 (name, city) VALUES ('Blake', 'chicago'), ('Hannah', 'seattle'), ('Bobby', 'seattle'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users3; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------+---------+--------+---------+-------------+ - 469048192112197633 | chicago | Blake | NULL | NULL - 469048192112263169 | seattle | Hannah | NULL | NULL - 469048192112295937 | seattle | Bobby | NULL | NULL -(3 rows) -~~~ - -Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed. - -For further background on UUIDs, see [What is a UUID, and Why Should You Care?](https://www.cockroachlabs.com/blog/what-is-a-uuid/). diff --git a/src/current/_includes/v21.2/faq/clock-synchronization-effects.md b/src/current/_includes/v21.2/faq/clock-synchronization-effects.md deleted file mode 100644 index 6724f47d857..00000000000 --- a/src/current/_includes/v21.2/faq/clock-synchronization-effects.md +++ /dev/null @@ -1,27 +0,0 @@ -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed, it spontaneously shuts down. This offset defaults to 500ms but can be changed via the [`--max-offset`](cockroach-start.html#flags-max-offset) flag when starting each node. 
- -While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -In very rare cases, CockroachDB can momentarily run with a stale clock. This can happen when using vMotion, which can suspend a VM running CockroachDB, migrate it to different hardware, and resume it. This will cause CockroachDB to be out of sync for a short period before it jumps to the correct time. During this window, it would be possible for a client to read stale data and write data derived from stale reads. By enabling the `server.clock.forward_jump_check_enabled` [cluster setting](cluster-settings.html), you can be alerted when the CockroachDB clock jumps forward, indicating it had been running with a stale clock. To protect against this on vMotion, however, use the [`--clock-device`](cockroach-start.html#general) flag to specify a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) for CockroachDB to use when querying the current time. When doing so, you should not enable `server.clock.forward_jump_check_enabled` because forward jumps will be expected and harmless. For more information on how `--clock-device` interacts with vMotion, see [this blog post](https://core.vmware.com/blog/cockroachdb-vmotion-support-vsphere-7-using-precise-timekeeping). - -### Considerations - -When setting up clock synchronization: - -- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing). -- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should. -- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. -- Do not run more than one clock sync service on VMs where `cockroach` is running. 
- {% include v21.2/misc/multiregion-max-offset.md %} - -### Tutorials - -For guidance on synchronizing clocks, see the tutorial for your deployment environment: - -Environment | Featured Approach ------------|--------------------- -[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service. -[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service. -[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service. -[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service. -[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service. diff --git a/src/current/_includes/v21.2/faq/clock-synchronization-monitoring.html b/src/current/_includes/v21.2/faq/clock-synchronization-monitoring.html deleted file mode 100644 index 7fb82e4d188..00000000000 --- a/src/current/_includes/v21.2/faq/clock-synchronization-monitoring.html +++ /dev/null @@ -1,8 +0,0 @@ -As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus time-series database. Two of these metrics export how close each node's clock is to the clock of all other nodes: - -Metric | Definition --------|----------- -`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks, in nanoseconds -`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks, in nanoseconds - -As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it approaches 80% of your cluster's configured max offset. 
diff --git a/src/current/_includes/v21.2/faq/differences-between-numberings.md b/src/current/_includes/v21.2/faq/differences-between-numberings.md deleted file mode 100644 index 741ec4f8066..00000000000 --- a/src/current/_includes/v21.2/faq/differences-between-numberings.md +++ /dev/null @@ -1,11 +0,0 @@ - -| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences | -|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------| -| Size | 16 bytes | 8 bytes | 1 to 8 bytes | -| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered | -| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention | -| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values | -| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local | -| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher | -| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node | -| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited | diff --git a/src/current/_includes/v21.2/faq/sequential-numbers.md b/src/current/_includes/v21.2/faq/sequential-numbers.md deleted file mode 100644 index 5b79c97566c..00000000000 --- a/src/current/_includes/v21.2/faq/sequential-numbers.md +++ /dev/null @@ -1,8 +0,0 @@ -Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations: - -- Unless you need roughly-ordered numbers, use [`UUID`](uuid.html) values instead. See the [previous -FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details. -- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that -consumes a lower sequence number commits after a transaction that consumes a higher number). -- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers. -- {% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %} diff --git a/src/current/_includes/v21.2/faq/sequential-transactions.md b/src/current/_includes/v21.2/faq/sequential-transactions.md deleted file mode 100644 index 684f2ce5d2a..00000000000 --- a/src/current/_includes/v21.2/faq/sequential-transactions.md +++ /dev/null @@ -1,19 +0,0 @@ -Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly -solutions instead. 
For example, CockroachDB's [time travel queries (`AS OF SYSTEM -TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following: - -- Paginating through all the changes to a table or dataset -- Determining the order of changes to data over time -- Determining the state of data at some point in the past -- Determining the changes to data between two points of time - -Consider also that the values generated by `unique_rowid()`, described in the previous FAQ entries, also provide an approximate time ordering. - -However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows: - -- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);` -- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;` - -This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result. - -If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs. diff --git a/src/current/_includes/v21.2/faq/simulate-key-value-store.html b/src/current/_includes/v21.2/faq/simulate-key-value-store.html deleted file mode 100644 index 4772fa5358c..00000000000 --- a/src/current/_includes/v21.2/faq/simulate-key-value-store.html +++ /dev/null @@ -1,13 +0,0 @@ -CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key: - -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES); -~~~ - -When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation: - -~~~ sql -> UPSERT INTO kv VALUES (1, b'hello') -~~~ - -This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises. 
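Tying this back to the time-travel queries mentioned earlier, a historical read against the `kv` table is a one-line addition — a minimal sketch, where the `-10s` offset is illustrative:

~~~ sql
> SELECT * FROM kv AS OF SYSTEM TIME '-10s';
~~~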
diff --git a/src/current/_includes/v21.2/filter-tabs/crdb-kubernetes.md b/src/current/_includes/v21.2/filter-tabs/crdb-kubernetes.md deleted file mode 100644 index db7f18ff324..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/crdb-kubernetes.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "orchestrate-a-local-cluster-with-kubernetes.html;orchestrate-a-local-cluster-with-kubernetes-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/crdb-single-kubernetes.md b/src/current/_includes/v21.2/filter-tabs/crdb-single-kubernetes.md deleted file mode 100644 index 409bdc1855c..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/crdb-single-kubernetes.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-with-kubernetes.html;deploy-cockroachdb-with-kubernetes-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/crud-go.md b/src/current/_includes/v21.2/filter-tabs/crud-go.md deleted file mode 100644 index a69d0e4435c..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/crud-go.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use pgx;Use GORM;Use lib/pq;Use upper/db" %} -{% assign html_page_filenames = "build-a-go-app-with-cockroachdb.html;build-a-go-app-with-cockroachdb-gorm.html;build-a-go-app-with-cockroachdb-pq.html;build-a-go-app-with-cockroachdb-upperdb.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/crud-java.md b/src/current/_includes/v21.2/filter-tabs/crud-java.md deleted file mode 100644 index 5cbdf749e09..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/crud-java.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use JDBC;Use Hibernate;Use jOOQ;Use MyBatis-Spring" %} -{% assign html_page_filenames = "build-a-java-app-with-cockroachdb.html;build-a-java-app-with-cockroachdb-hibernate.html;build-a-java-app-with-cockroachdb-jooq.html;build-a-spring-app-with-cockroachdb-mybatis.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/crud-js.md b/src/current/_includes/v21.2/filter-tabs/crud-js.md deleted file mode 100644 index bb319ed88c1..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/crud-js.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use node-postgres;Use Sequelize;Use Knex.js;Use Prisma;Use TypeORM" %} -{% assign html_page_filenames = "build-a-nodejs-app-with-cockroachdb.html;build-a-nodejs-app-with-cockroachdb-sequelize.html;build-a-nodejs-app-with-cockroachdb-knexjs.html;build-a-nodejs-app-with-cockroachdb-prisma.html;build-a-typescript-app-with-cockroachdb.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/crud-python.md b/src/current/_includes/v21.2/filter-tabs/crud-python.md deleted file mode 100644 index cb4905591f0..00000000000 --- 
a/src/current/_includes/v21.2/filter-tabs/crud-python.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use psycopg3;Use psycopg2;Use SQLAlchemy;Use Django;Use peewee" %} -{% assign html_page_filenames = "build-a-python-app-with-cockroachdb-psycopg3.html;build-a-python-app-with-cockroachdb.html;build-a-python-app-with-cockroachdb-sqlalchemy.html;build-a-python-app-with-cockroachdb-django.html;https://docs.peewee-orm.com/en/latest/peewee/playhouse.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/crud-ruby.md b/src/current/_includes/v21.2/filter-tabs/crud-ruby.md deleted file mode 100644 index 5fc13aa697b..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/crud-ruby.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use pg;Use ActiveRecord" %} -{% assign html_page_filenames = "build-a-ruby-app-with-cockroachdb.html;build-a-ruby-app-with-cockroachdb-activerecord.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/crud-spring.md b/src/current/_includes/v21.2/filter-tabs/crud-spring.md deleted file mode 100644 index bd4f66f19a7..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/crud-spring.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use JDBC;Use JPA" %} -{% assign html_page_filenames = "build-a-spring-app-with-cockroachdb-jdbc.html;build-a-spring-app-with-cockroachdb-jpa.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-aws.md b/src/current/_includes/v21.2/filter-tabs/deploy-crdb-aws.md deleted file mode 100644 index 706e5d85b8f..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-aws.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-aws.html;deploy-cockroachdb-on-aws-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-do.md b/src/current/_includes/v21.2/filter-tabs/deploy-crdb-do.md deleted file mode 100644 index 02e44afee30..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-do.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-digital-ocean.html;deploy-cockroachdb-on-digital-ocean-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-gce.md b/src/current/_includes/v21.2/filter-tabs/deploy-crdb-gce.md deleted file mode 100644 index 5799dfec9f0..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-gce.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-google-cloud-platform.html;deploy-cockroachdb-on-google-cloud-platform-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git 
a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-ma.md b/src/current/_includes/v21.2/filter-tabs/deploy-crdb-ma.md deleted file mode 100644 index 3f1162b426c..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-ma.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-microsoft-azure.html;deploy-cockroachdb-on-microsoft-azure-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-op.md b/src/current/_includes/v21.2/filter-tabs/deploy-crdb-op.md deleted file mode 100644 index fdf35c61162..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/deploy-crdb-op.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "deploy-cockroachdb-on-premises.html;deploy-cockroachdb-on-premises-insecure.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/perf-bench-tpc-c.md b/src/current/_includes/v21.2/filter-tabs/perf-bench-tpc-c.md deleted file mode 100644 index 1394f916add..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/perf-bench-tpc-c.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Local;Local (Multi-Region);Small;Medium;Large" %} -{% assign html_page_filenames = "performance-benchmarking-with-tpcc-local.html;performance-benchmarking-with-tpcc-local-multiregion.html;performance-benchmarking-with-tpcc-small.html;performance-benchmarking-with-tpcc-medium.html;performance-benchmarking-with-tpcc-large.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/security-cert.md b/src/current/_includes/v21.2/filter-tabs/security-cert.md deleted file mode 100644 index 0832e618021..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/security-cert.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Use cockroach cert;Use OpenSSL;Use custom CA" %} -{% assign html_page_filenames = "cockroach-cert.html;create-security-certificates-openssl.html;create-security-certificates-custom-ca.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/filter-tabs/start-a-cluster.md b/src/current/_includes/v21.2/filter-tabs/start-a-cluster.md deleted file mode 100644 index 92a688078cb..00000000000 --- a/src/current/_includes/v21.2/filter-tabs/start-a-cluster.md +++ /dev/null @@ -1,4 +0,0 @@ -{% assign tab_names_html = "Secure;Insecure" %} -{% assign html_page_filenames = "secure-a-cluster.html;start-a-local-cluster.html" %} - -{% include filter-tabs.md tab_names=tab_names_html page_filenames=html_page_filenames page_folder=page.version.version %} diff --git a/src/current/_includes/v21.2/import-table-deprecate.md b/src/current/_includes/v21.2/import-table-deprecate.md deleted file mode 100644 index 715a37c6f4e..00000000000 --- a/src/current/_includes/v21.2/import-table-deprecate.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -As of v21.2, certain `IMPORT TABLE` statements that defined the table schema inline are **deprecated**. 
To import data into a new table, use [`CREATE TABLE`](create-table.html) followed by [`IMPORT INTO`](import-into.html). For an example, read [Import into a new table from a CSV file](import-into.html#import-into-a-new-table-from-a-csv-file). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/json/json-sample.go b/src/current/_includes/v21.2/json/json-sample.go deleted file mode 100644 index d5953a71ee2..00000000000 --- a/src/current/_includes/v21.2/json/json-sample.go +++ /dev/null @@ -1,79 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "io/ioutil" - "net/http" - "time" - - _ "github.com/lib/pq" -) - -func main() { - db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257") - if err != nil { - panic(err) - } - - // The Reddit API wants us to tell it where to start from. The first request - // we just say "null" to say "from the start"; subsequent requests will use - // the value received from the last call. - after := "null" - - for i := 0; i < 41; i++ { - after, err = makeReq(db, after) - if err != nil { - panic(err) - } - // Reddit limits to 30 requests per minute, so do not do any more than that. - time.Sleep(2 * time.Second) - } -} - -func makeReq(db *sql.DB, after string) (string, error) { - // First, make a request to reddit using the appropriate "after" string. - client := &http.Client{} - req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil) - // Check the request-construction error before using req. - if err != nil { - return "", err - } - - req.Header.Add("User-Agent", `Go`) - - resp, err := client.Do(req) - if err != nil { - return "", err - } - // Close the response body once it has been read. - defer resp.Body.Close() - - res, err := ioutil.ReadAll(resp.Body) - if err != nil { - return "", err - } - - // We've gotten back our JSON from reddit, so we can use a couple SQL tricks to - // accomplish multiple things at once. - // The JSON reddit returns looks like this: - // { - // "data": { - // "children": [ ... ] - // }, - // "after": ... - // } - // We structure our query so that we extract the `children` field, and then - // expand that and insert each individual element into the database as a - // separate row. We then return the "after" field so we know how to make the - // next request. - r, err := db.Query(` - INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements($1->'data'->'children') - RETURNING $1->'data'->'after'`, - string(res)) - if err != nil { - return "", err - } - // Release the rows iterator once the result has been read. - defer r.Close() - - // Since we did a RETURNING, we need to grab the result of our query. - r.Next() - var newAfter string - r.Scan(&newAfter) - - return newAfter, nil -} diff --git a/src/current/_includes/v21.2/json/json-sample.py b/src/current/_includes/v21.2/json/json-sample.py deleted file mode 100644 index 49e302613e0..00000000000 --- a/src/current/_includes/v21.2/json/json-sample.py +++ /dev/null @@ -1,44 +0,0 @@ -import json -import psycopg2 -import requests -import time - -conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257) -conn.set_session(autocommit=True) -cur = conn.cursor() - -# The Reddit API wants us to tell it where to start from. The first request -# we just say "null" to say "from the start"; subsequent requests will use -# the value received from the last call. -url = "https://www.reddit.com/r/programming.json" -after = {"after": "null"} - -for n in range(41): - # First, make a request to reddit using the appropriate "after" string. - req = requests.get(url, params=after, headers={"User-Agent": "Python"}) - - # Decode the JSON and set "after" for the next request. 
- resp = req.json() - after = {"after": str(resp['data']['after'])} - - # Convert the JSON to a string to send to the database. - data = json.dumps(resp) - - # The JSON reddit returns looks like this: - # { - # "data": { - # "children": [ ... ] - # }, - # "after": ... - # } - # We structure our query so that we extract the `children` field, and then - # expand that and insert each individual element into the database as a - # separate row. - cur.execute("""INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements(%s->'data'->'children')""", (data,)) - - # Reddit limits to 30 requests per minute, so do not do any more than that. - time.sleep(2) - -cur.close() -conn.close() diff --git a/src/current/_includes/v21.2/known-limitations/cdc.md b/src/current/_includes/v21.2/known-limitations/cdc.md deleted file mode 100644 index 8dad2152588..00000000000 --- a/src/current/_includes/v21.2/known-limitations/cdc.md +++ /dev/null @@ -1,10 +0,0 @@ -- Changefeeds only work on tables with a single [column family](column-families.html) (which is the default for new tables). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/28667) -- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73434) -- Changefeeds cannot be altered. To alter, cancel the changefeed and [create a new one with updated settings from where it left off](create-changefeed.html#start-a-new-changefeed-where-another-ended). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/28668) -- Changefeed target options are limited to tables. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73435) -- Using a [cloud storage sink](changefeed-sinks.html#cloud-storage-sink) only works with `JSON` and emits [newline-delimited json](http://ndjson.org) files. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432) -- Webhook sinks only support HTTPS. Use the [`insecure_tls_skip_verify`](create-changefeed.html#tls-skip-verify) parameter when testing to disable certificate verification; however, this still requires HTTPS and certificates. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73431) -- Currently, webhook sinks only have support for emitting `JSON`. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432) -- There is no concurrency configurability for [webhook sinks](changefeed-sinks.html#webhook-sink). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73430) -- {{ site.data.products.enterprise }} changefeeds are currently disabled for [CockroachDB {{ site.data.products.serverless }} clusters](../cockroachcloud/quickstart.html). Core changefeeds are enabled. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73429) -- Changefeeds will emit [`NULL` values](null-handling.html) for [`VIRTUAL` computed columns](computed-columns.html) and not the column's computed value. 
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/74688) diff --git a/src/current/_includes/v21.2/known-limitations/copy-from-clients.md b/src/current/_includes/v21.2/known-limitations/copy-from-clients.md deleted file mode 100644 index 4428aaf74f7..00000000000 --- a/src/current/_includes/v21.2/known-limitations/copy-from-clients.md +++ /dev/null @@ -1,5 +0,0 @@ -The built-in SQL shell provided with CockroachDB ([`cockroach sql`](cockroach-sql.html) / [`cockroach demo`](cockroach-demo.html)) does not currently support importing data with the `COPY` statement. - -To load data into CockroachDB, we recommend that you use an [`IMPORT`](import.html) statement. If you must use a `COPY` statement, you can issue the statement from the [`psql` client](https://www.postgresql.org/docs/current/app-psql.html) provided with PostgreSQL, or from another third-party client. - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/16392) \ No newline at end of file diff --git a/src/current/_includes/v21.2/known-limitations/copy-syntax.md b/src/current/_includes/v21.2/known-limitations/copy-syntax.md deleted file mode 100644 index fb38157814f..00000000000 --- a/src/current/_includes/v21.2/known-limitations/copy-syntax.md +++ /dev/null @@ -1,9 +0,0 @@ -CockroachDB does not yet support the following `COPY` syntax: - -- `COPY ... TO`. To copy data from a CockroachDB cluster to a file, use an [`EXPORT`](export.html) statement. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/41608) - -- `COPY ... FROM ... WHERE <expr>` - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54580) \ No newline at end of file diff --git a/src/current/_includes/v21.2/known-limitations/drop-single-partition.md b/src/current/_includes/v21.2/known-limitations/drop-single-partition.md deleted file mode 100644 index 3d8166fdc04..00000000000 --- a/src/current/_includes/v21.2/known-limitations/drop-single-partition.md +++ /dev/null @@ -1 +0,0 @@ -CockroachDB does not currently support dropping a single partition from a table. In order to remove partitions, you can [repartition]({% unless page.name == "partitioning.md" %}partitioning.html{% endunless %}#repartition-a-table) the table. diff --git a/src/current/_includes/v21.2/known-limitations/drop-unique-index-from-create-table.md b/src/current/_includes/v21.2/known-limitations/drop-unique-index-from-create-table.md deleted file mode 100644 index 698a24c24ef..00000000000 --- a/src/current/_includes/v21.2/known-limitations/drop-unique-index-from-create-table.md +++ /dev/null @@ -1 +0,0 @@ -[`UNIQUE` indexes](create-index.html) created as part of a [`CREATE TABLE`](create-table.html) statement cannot be removed without using [`CASCADE`]({% unless page.name == "drop-index.md" %}drop-index.html{% endunless %}#remove-an-index-and-dependent-objects-with-cascade). Unique indexes created with [`CREATE INDEX`](create-index.html) do not have this limitation. diff --git a/src/current/_includes/v21.2/known-limitations/dropping-renaming-during-upgrade.md b/src/current/_includes/v21.2/known-limitations/dropping-renaming-during-upgrade.md deleted file mode 100644 index 38f7f9ddd87..00000000000 --- a/src/current/_includes/v21.2/known-limitations/dropping-renaming-during-upgrade.md +++ /dev/null @@ -1,10 +0,0 @@ -When upgrading from v20.1.x to v20.2.0, as soon as any node of the cluster has run v20.2.0, it is important to avoid dropping, renaming, or truncating tables, views, sequences, or databases on the v20.1 nodes. 
This is true even in cases where nodes were upgraded to v20.2.0 and then rolled back to v20.1. - -In this case, avoid running the following operations against v20.1 nodes: - -- [`DROP TABLE`](drop-table.html), [`TRUNCATE TABLE`](truncate.html), [`RENAME TABLE`](rename-table.html) -- [`DROP VIEW`](drop-view.html) -- [`DROP SEQUENCE`](drop-sequence.html), [`RENAME SEQUENCE`](rename-sequence.html) -- [`DROP DATABASE`](drop-database.html), [`RENAME DATABASE`](rename-database.html) - -Running any of these operations against v20.1 nodes will result in inconsistency between two internal tables, `system.namespace` and `system.namespace2`. This inconsistency will prevent you from being able to recreate the dropped or renamed objects; the returned error will be `ERROR: relation already exists`. In the case of a dropped or renamed database, [`SHOW DATABASES`](show-databases.html) will also return an error: `ERROR: internal error: "" is not a database`. diff --git a/src/current/_includes/v21.2/known-limitations/import-high-disk-contention.md b/src/current/_includes/v21.2/known-limitations/import-high-disk-contention.md deleted file mode 100644 index 0e016ecaac5..00000000000 --- a/src/current/_includes/v21.2/known-limitations/import-high-disk-contention.md +++ /dev/null @@ -1,6 +0,0 @@ -[`IMPORT`](import.html) can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB'; -~~~ diff --git a/src/current/_includes/v21.2/known-limitations/old-multi-col-stats.md b/src/current/_includes/v21.2/known-limitations/old-multi-col-stats.md deleted file mode 100644 index 595be9c7209..00000000000 --- a/src/current/_includes/v21.2/known-limitations/old-multi-col-stats.md +++ /dev/null @@ -1,3 +0,0 @@ -When a column is dropped from a multi-column index, the {% if page.name == "cost-based-optimizer.md" %} optimizer {% else %} [optimizer](cost-based-optimizer.html) {% endif %} will not collect new statistics for the deleted column. However, the optimizer never deletes the old [multi-column statistics](create-statistics.html#create-statistics-on-multiple-columns). This can cause a buildup of statistics in `system.table_statistics`, leading the optimizer to use stale statistics, which could result in sub-optimal plans. To work around this issue and avoid these scenarios, explicitly [delete those statistics](create-statistics.html#delete-statistics) from the `system.table_statistics` table. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407) diff --git a/src/current/_includes/v21.2/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v21.2/known-limitations/partitioning-with-placeholders.md deleted file mode 100644 index b3c3345200d..00000000000 --- a/src/current/_includes/v21.2/known-limitations/partitioning-with-placeholders.md +++ /dev/null @@ -1 +0,0 @@ -When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause. 
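In other words, partition values must be spelled out as literals at definition time. A hypothetical sketch (the table, partition names, and regions are illustrative; table partitioning also requires an {{ site.data.products.enterprise }} license):

~~~ sql
> CREATE TABLE users (
    region STRING NOT NULL,
    id UUID NOT NULL DEFAULT gen_random_uuid(),
    PRIMARY KEY (region, id)
) PARTITION BY LIST (region) (
    PARTITION us_east VALUES IN ('us-east1'),
    PARTITION us_west VALUES IN ('us-west1')
);
~~~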
diff --git a/src/current/_includes/v21.2/known-limitations/restore-multiregion-match.md b/src/current/_includes/v21.2/known-limitations/restore-multiregion-match.md deleted file mode 100644 index 6d0f6c989fc..00000000000 --- a/src/current/_includes/v21.2/known-limitations/restore-multiregion-match.md +++ /dev/null @@ -1,48 +0,0 @@ -[`REGIONAL BY TABLE`](multiregion-overview.html#regional-tables) and [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables) tables can be restored **only** if the regions of the backed-up table match those of the target database. All of the following must be true for `RESTORE` to be successful: - - * The source database and the destination database have the same set of [regions](multiregion-overview.html#database-regions). - * The regions were added to each of the databases in the same order. - * The databases have the same [primary region](set-primary-region.html). - - The following example has **mismatched** regions because the regions were not added to the databases in the same order and the primary regions do not match. - - Running on the source database: - - ~~~ sql - ALTER DATABASE source_database SET PRIMARY REGION "us-east1"; - ~~~ - ~~~ sql - ALTER DATABASE source_database ADD REGION "us-west1"; - ~~~ - - Running on the destination database: - - ~~~ sql - ALTER DATABASE destination_database SET PRIMARY REGION "us-west1"; - ~~~ - ~~~ sql - ALTER DATABASE destination_database ADD REGION "us-east1"; - ~~~ - - In addition, the following scenario has mismatched regions because the regions were not added to each database in the same order, even though the primary regions match. - - Running on the source database: - - ~~~ sql - ALTER DATABASE source_database SET PRIMARY REGION "us-east1"; - ~~~ - ~~~ sql - ALTER DATABASE source_database ADD REGION "us-west1"; - ~~~ - - Running on the destination database: - - ~~~ sql - ALTER DATABASE destination_database SET PRIMARY REGION "us-west1"; - ~~~ - ~~~ sql - ALTER DATABASE destination_database ADD REGION "us-east1"; - ~~~ - ~~~ sql - ALTER DATABASE destination_database SET PRIMARY REGION "us-east1"; - ~~~ diff --git a/src/current/_includes/v21.2/known-limitations/restore-tables-non-multi-reg.md b/src/current/_includes/v21.2/known-limitations/restore-tables-non-multi-reg.md deleted file mode 100644 index 45ce8db1924..00000000000 --- a/src/current/_includes/v21.2/known-limitations/restore-tables-non-multi-reg.md +++ /dev/null @@ -1 +0,0 @@ -Restoring [`GLOBAL`](multiregion-overview.html#global-tables) and [`REGIONAL BY TABLE`](multiregion-overview.html#regional-tables) tables into a **non**-multi-region database is not supported. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/71502) diff --git a/src/current/_includes/v21.2/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md b/src/current/_includes/v21.2/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md deleted file mode 100644 index 6a5445da71c..00000000000 --- a/src/current/_includes/v21.2/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md +++ /dev/null @@ -1,64 +0,0 @@ -Schema change [DDL](https://en.wikipedia.org/wiki/Data_definition_language#ALTER_statement) statements that run inside a multi-statement transaction with non-DDL statements can fail at [`COMMIT`](commit-transaction.html) time, even if other statements in the transaction succeed. 
This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded. - -If such a failure occurs, CockroachDB emits the CockroachDB-specific error code `XXA00` and the following error message: - -``` -transaction committed but schema change aborted with error: -HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed. -Manual inspection may be required to determine the actual state of the database. -``` - -{{site.data.alerts.callout_info}} -This limitation also exists in versions of CockroachDB prior to v19.2, where CockroachDB returned the PostgreSQL error code `40003`, `"statement completion unknown"`, instead. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -If you must execute schema change DDL statements inside a multi-statement transaction, we **strongly recommend** checking for this error code and handling it appropriately every time you execute such transactions. -{{site.data.alerts.end}} - -This error can occur in various scenarios, including but not limited to: - -- Creating a unique index fails because values aren't unique. -- The evaluation of a computed value fails. -- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column. - -To see an example of this error, start by creating the following table. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE t (x INT); -INSERT INTO t(x) VALUES (1), (2), (3); -~~~ - -Then, enter the following multi-statement transaction, which will trigger the error. - -{% include_cached copy-clipboard.html %} -~~~ sql -BEGIN; -ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x); -INSERT INTO t(x) VALUES (3); -COMMIT; -~~~ - -~~~ -pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x" -HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed. -Manual inspection may be required to determine the actual state of the database. -~~~ - -In this example, the [`INSERT`](insert.html) statement committed, but the [`ALTER TABLE`](alter-table.html) statement adding a [`UNIQUE` constraint](unique.html) failed. We can verify this by looking at the data in table `t` and seeing that the additional non-unique value `3` was successfully inserted. - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM t; -~~~ - -~~~ - x -+---+ - 1 - 2 - 3 - 3 -(4 rows) -~~~ diff --git a/src/current/_includes/v21.2/known-limitations/schema-changes-between-prepared-statements.md b/src/current/_includes/v21.2/known-limitations/schema-changes-between-prepared-statements.md deleted file mode 100644 index 736fe99df61..00000000000 --- a/src/current/_includes/v21.2/known-limitations/schema-changes-between-prepared-statements.md +++ /dev/null @@ -1,33 +0,0 @@ -When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. 
For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE users (id INT PRIMARY KEY); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -PREPARE prep1 AS SELECT * FROM users; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE users ADD COLUMN name STRING; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO users VALUES (1, 'Max Roach'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -EXECUTE prep1; -~~~ - -~~~ -ERROR: cached plan must not change result type -SQLSTATE: 0A000 -~~~ - -We therefore recommend explicitly listing result columns instead of using `SELECT *` in prepared statements whenever possible. diff --git a/src/current/_includes/v21.2/known-limitations/schema-changes-within-transactions.md b/src/current/_includes/v21.2/known-limitations/schema-changes-within-transactions.md deleted file mode 100644 index 1fcf7158e24..00000000000 --- a/src/current/_includes/v21.2/known-limitations/schema-changes-within-transactions.md +++ /dev/null @@ -1,13 +0,0 @@ -Within a single [transaction](transactions.html): - -- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions. For more details, [see examples of unsupported statements](online-schema-changes.html#examples-of-statements-that-fail). -- As of version v2.1, you can run schema changes inside the same transaction as a [`CREATE TABLE`](create-table.html) statement. For more information, [see this example](online-schema-changes.html#run-schema-changes-inside-a-transaction-with-create-table). -- A `CREATE TABLE` statement containing [`FOREIGN KEY`](foreign-key.html) clauses cannot be followed by statements that reference the new table. -- Database, schema, table, and user-defined type names cannot be reused. For example, you cannot drop a table named `a` and then create (or rename) a different table with the name `a`. Similarly, you cannot rename a database named `a` to `b` and then create (or rename) a different database with the name `a`. As a workaround, split `RENAME TO`, `DROP`, and `CREATE` statements that reuse object names into separate transactions, as sketched below. -- [Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed](#schema-change-ddl-statements-inside-a-multi-statement-transaction-can-fail-while-other-statements-succeed). -- As of v19.1, some schema changes can be used in combination in a single `ALTER TABLE` statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically). -- [`DROP COLUMN`](drop-column.html) can result in data loss if one of the other schema changes in the transaction fails or is canceled. To work around this, move the `DROP COLUMN` statement to its own explicit transaction or run it in a single statement outside the existing transaction. - -{{site.data.alerts.callout_info}} -If a schema change within a transaction fails, manual intervention may be needed to determine which statement has failed. After determining which schema change(s) failed, you can then retry the schema changes. -{{site.data.alerts.end}}
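-For example, the following sketch of the name-reuse limitation (the table name `a` and its schema are illustrative) fails when run as one transaction but succeeds when the `DROP` and `CREATE` statements run in separate transactions:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
--- Fails: the name `a` is reused within one transaction.
-BEGIN;
-DROP TABLE a;
-CREATE TABLE a (id INT PRIMARY KEY);
-COMMIT;
-
--- Workaround: run the statements in separate transactions.
-DROP TABLE a;
-CREATE TABLE a (id INT PRIMARY KEY);
-~~~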
diff --git a/src/current/_includes/v21.2/known-limitations/set-transaction-no-rollback.md b/src/current/_includes/v21.2/known-limitations/set-transaction-no-rollback.md deleted file mode 100644 index 4ab3661f4f7..00000000000 --- a/src/current/_includes/v21.2/known-limitations/set-transaction-no-rollback.md +++ /dev/null @@ -1,17 +0,0 @@ -{% if page.name == "set-vars.md" %} `SET` {% else %} [`SET`](set-vars.html) {% endif %} statements are not properly rolled back by [`ROLLBACK`](rollback-transaction.html) within a transaction. For example, in the following transaction, showing the `TIME ZONE` [variable](set-vars.html#supported-variables) does not return `2` as expected after the rollback: - -~~~ sql -SET TIME ZONE +2; -BEGIN; -SET TIME ZONE +3; -ROLLBACK; -SHOW TIME ZONE; -~~~ - -~~~ -timezone ------------- -3 -~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/69396) diff --git a/src/current/_includes/v21.2/known-limitations/single-col-stats-deletion.md b/src/current/_includes/v21.2/known-limitations/single-col-stats-deletion.md deleted file mode 100644 index b8baa46c5d2..00000000000 --- a/src/current/_includes/v21.2/known-limitations/single-col-stats-deletion.md +++ /dev/null @@ -1,3 +0,0 @@ -[Single-column statistics](create-statistics.html#create-statistics-on-a-single-column) are not deleted when columns are dropped, which could cause minor performance issues. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407) diff --git a/src/current/_includes/v21.2/known-limitations/stats-refresh-upgrade.md b/src/current/_includes/v21.2/known-limitations/stats-refresh-upgrade.md deleted file mode 100644 index f54a08b3754..00000000000 --- a/src/current/_includes/v21.2/known-limitations/stats-refresh-upgrade.md +++ /dev/null @@ -1,3 +0,0 @@ -The [automatic statistics refresher](cost-based-optimizer.html#control-statistics-refresh-rate) checks whether it needs to refresh statistics for every table in the database upon startup of each node in the cluster. If statistics for a table have not been refreshed in a while, this triggers collection of statistics for that table. If statistics have been refreshed recently, it will not force a refresh. As a result, the automatic statistics refresher does not necessarily perform a refresh of statistics after an [upgrade](upgrade-cockroach-version.html). This could cause a problem, for example, if the upgrade moves from a version without [histograms](cost-based-optimizer.html#control-histogram-collection) to a version with histograms. To refresh statistics manually, use [`CREATE STATISTICS`](create-statistics.html). - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54816) diff --git a/src/current/_includes/v21.2/known-limitations/userfile-upload-non-recursive.md b/src/current/_includes/v21.2/known-limitations/userfile-upload-non-recursive.md deleted file mode 100644 index d873b1f5e33..00000000000 --- a/src/current/_includes/v21.2/known-limitations/userfile-upload-non-recursive.md +++ /dev/null @@ -1 +0,0 @@ -- `cockroach userfile upload` does not currently allow for recursive uploads from a directory. A `--recursive` flag is planned for a future version. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/pull/65307)
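-For example (a sketch; the file paths and connection URL are placeholders), each file in a directory must currently be uploaded individually:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ cockroach userfile upload /tmp/exports/data1.csv /imports/data1.csv --url 'postgresql://root@localhost:26257?sslmode=disable'
-$ cockroach userfile upload /tmp/exports/data2.csv /imports/data2.csv --url 'postgresql://root@localhost:26257?sslmode=disable'
-~~~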
diff --git a/src/current/_includes/v21.2/metric-names.md b/src/current/_includes/v21.2/metric-names.md deleted file mode 100644 index 477afe28a1b..00000000000 --- a/src/current/_includes/v21.2/metric-names.md +++ /dev/null @@ -1,252 +0,0 @@ -Name | Help ------|----- -`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas) -`addsstable.copies` | Number of SSTable ingestions that required copying files during application -`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders) -`build.timestamp` | Build information -`capacity.available` | Available storage capacity -`capacity.reserved` | Capacity reserved for snapshots -`capacity.used` | Used storage capacity -`capacity` | Total storage capacity -`changefeed.failures` | Total number of changefeed jobs which have failed -`changefeed.running` | Number of currently running changefeeds, including sinkless -`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds -`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds -`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges -`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine -`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine -`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions -`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue -`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted -`distsender.batches.partial` | Number of partial batches processed -`distsender.batches` | Number of batches processed -`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered -`distsender.rpc.sent.local` | Number of local RPCs sent -`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors -`distsender.rpc.sent` | Number of RPCs sent -`exec.error` | Number of batch KV requests that failed to execute on this node -`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node -`exec.success` | Number of batch KV requests executed successfully on this node -`gcbytesage` | Cumulative age of non-live data in seconds -`gossip.bytes.received` | Number of received gossip bytes -`gossip.bytes.sent` | Number of sent gossip bytes -`gossip.connections.incoming` | Number of active incoming gossip connections -`gossip.connections.outgoing` | Number of active outgoing gossip connections -`gossip.connections.refused` | Number of refused incoming gossip connections -`gossip.infos.received` | Number of received gossip Info objects -`gossip.infos.sent` | Number of sent gossip Info objects -`intentage` | Cumulative age of intents in seconds -`intentbytes` | Number of bytes in intent KV pairs -`intentcount` | Count of intent keys -`keybytes` | Number of bytes taken up by keys -`keycount` | Count of all keys -`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated -`leases.epoch` | Number of replica leaseholders using epoch-based leases -`leases.error` | Number of failed lease requests -`leases.expiration` | Number of replica leaseholders using expiration-based leases -`leases.success` | Number of successful lease requests 
-`leases.transfers.error` | Number of failed lease transfers -`leases.transfers.success` | Number of successful lease transfers -`livebytes` | Number of bytes of live data (keys plus values), including unreplicated data -`livecount` | Count of live keys -`liveness.epochincrements` | Number of times this node has incremented its liveness epoch -`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node -`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds -`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node -`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live) -`node-id` | node ID with labels for advertised RPC and HTTP addresses -`queue.consistency.pending` | Number of pending replicas in the consistency checker queue -`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue -`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue -`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue -`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal -`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal -`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine -`queue.gc.info.intentsconsidered` | Number of 'old' intents -`queue.gc.info.intenttxns` | Number of associated distinct transactions -`queue.gc.info.numkeysaffected` | Number of keys with GC'able data -`queue.gc.info.pushtxn` | Number of attempted pushes -`queue.gc.info.resolvesuccess` | Number of successful intent resolutions -`queue.gc.info.resolvetotal` | Number of attempted intent resolutions -`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns -`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns -`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns -`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine -`queue.gc.pending` | Number of pending replicas in the GC queue -`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue -`queue.gc.process.success` | Number of replicas successfully processed by the GC queue -`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue -`queue.raftlog.pending` | Number of pending replicas in the Raft log queue -`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue -`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue -`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue -`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue -`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue -`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue -`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue -`queue.replicagc.pending` | Number of pending replicas in the replica GC queue -`queue.replicagc.process.failure` | Number of replicas which 
failed processing in the replica GC queue -`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue -`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue -`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue -`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue -`queue.replicate.pending` | Number of pending replicas in the replicate queue -`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue -`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue -`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue -`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options -`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue -`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage) -`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition) -`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue -`queue.split.pending` | Number of pending replicas in the split queue -`queue.split.process.failure` | Number of replicas which failed processing in the split queue -`queue.split.process.success` | Number of replicas successfully processed by the split queue -`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue -`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue -`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue -`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue -`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue -`raft.commandsapplied` | Count of Raft commands applied -`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue -`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced -`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands -`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries -`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick() -`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working -`raft.rcvd.app` | Number of MsgApp messages received by this store -`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store -`raft.rcvd.dropped` | Number of dropped incoming Raft messages -`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store -`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store -`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store -`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store -`raft.rcvd.prop` | Number of MsgProp messages received by this store -`raft.rcvd.snap` | 
Number of MsgSnap messages received by this store -`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store -`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store -`raft.rcvd.vote` | Number of MsgVote messages received by this store -`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store -`raft.ticks` | Number of Raft ticks queued -`raftlog.behind` | Number of Raft log entries followers on other stores are behind -`raftlog.truncated` | Number of Raft log entries truncated -`range.adds` | Number of range additions -`range.raftleadertransfers` | Number of Raft leader transfers -`range.removes` | Number of range removals -`range.snapshots.generated` | Number of generated snapshots -`range.snapshots.normal-applied` | Number of applied snapshots -`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots -`range.snapshots.rcvd-bytes` | Number of snapshot bytes received -`range.snapshots.sent-bytes` | Number of snapshot bytes sent -`range.splits` | Number of range splits -`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum -`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target -`ranges` | Number of ranges -`rebalancing.writespersecond` | Number of keys written (i.e., applied by Raft) per second to the store, averaged over a large time period as used in rebalancing decisions -`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined -`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined -`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined -`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue -`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue -`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue -`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree -`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue -`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store -`replicas.leaders` | Number of Raft leaders -`replicas.leaseholders` | Number of lease holders -`replicas.quiescent` | Number of quiesced replicas -`replicas.reserved` | Number of replicas reserved for snapshots -`replicas` | Number of replicas -`requests.backpressure.split` | Number of backpressured writes waiting on a Range split -`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue -`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender -`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease -`requests.slow.raft` | Number of requests that have been stuck for a long time in Raft -`rocksdb.block.cache.hits` | Count of block cache hits -`rocksdb.block.cache.misses` | Count of block cache misses -`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache -`rocksdb.block.cache.usage` | Bytes used by the block cache -`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked -`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator 
creation -`rocksdb.compactions` | Number of table compactions -`rocksdb.flushes` | Number of table flushes -`rocksdb.memtable.total-size` | Current size of memtable in bytes -`rocksdb.num-sstables` | Number of storage engine SSTables -`rocksdb.read-amplification` | Number of disk reads per query -`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks -`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds -`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error. -`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error. -`sql.bytesin` | Number of sql bytes received -`sql.bytesout` | Number of sql bytes sent -`sql.conns` | Number of active sql connections -`sql.ddl.count` | Number of SQL DDL statements -`sql.delete.count` | Number of SQL DELETE statements -`sql.distsql.exec.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine. This metric does not include the time to parse and plan the statement. -`sql.distsql.flows.active` | Number of distributed SQL flows currently active -`sql.distsql.flows.total` | Number of distributed SQL flows executed -`sql.distsql.queries.active` | Number of distributed SQL queries currently active -`sql.distsql.queries.total` | Number of distributed SQL queries executed -`sql.distsql.select.count` | Number of DistSQL SELECT statements -`sql.distsql.service.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine, including the time to parse and plan the statement. -`sql.exec.latency` | Latency in nanoseconds of all SQL statement executions. This metric does not include the time to parse and plan the statement. -`sql.guardrails.max_row_size_err.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_err` limit. -`sql.guardrails.max_row_size_log.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_log` limit. -`sql.insert.count` | Number of SQL INSERT statements -`sql.mem.current` | Current sql statement memory usage -`sql.mem.distsql.current` | Current sql statement memory usage for distsql -`sql.mem.distsql.max` | Memory usage per sql statement for distsql -`sql.mem.max` | Memory usage per sql statement -`sql.mem.session.current` | Current sql session memory usage -`sql.mem.session.max` | Memory usage per sql session -`sql.mem.txn.current` | Current sql transaction memory usage -`sql.mem.txn.max` | Memory usage per sql transaction -`sql.misc.count` | Number of other SQL statements -`sql.query.count` | Number of SQL queries -`sql.select.count` | Number of SQL SELECT statements -`sql.service.latency` | Latency in nanoseconds of SQL request execution, including the time to parse and plan the statement. 
-`sql.txn.abort.count` | Number of SQL transaction ABORT statements -`sql.txn.begin.count` | Number of SQL transaction BEGIN statements -`sql.txn.commit.count` | Number of SQL transaction COMMIT statements -`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements -`sql.update.count` | Number of SQL UPDATE statements -`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo -`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released -`sys.cgocalls` | Total number of cgo calls -`sys.cpu.sys.ns` | Total system cpu time in nanoseconds -`sys.cpu.sys.percent` | Current system cpu percentage -`sys.cpu.user.ns` | Total user cpu time in nanoseconds -`sys.cpu.user.percent` | Current user cpu percentage -`sys.fd.open` | Process open file descriptors -`sys.fd.softlimit` | Process open FD soft limit -`sys.gc.count` | Total number of GC runs -`sys.gc.pause.ns` | Total GC pause in nanoseconds -`sys.gc.pause.percent` | Current GC pause percentage -`sys.go.allocbytes` | Current bytes of memory allocated by go -`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released -`sys.goroutines` | Current number of goroutines -`sys.rss` | Current process RSS -`sys.uptime` | Process uptime in seconds -`sysbytes` | Number of bytes in system KV pairs -`syscount` | Count of system KV pairs -`timeseries.write.bytes` | Total size in bytes of metric samples written to disk -`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk -`timeseries.write.samples` | Total number of metric samples written to disk -`totalbytes` | Total number of bytes taken up by keys and values including non-live data -`tscache.skl.read.pages` | Number of pages in the read timestamp cache -`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache -`tscache.skl.write.pages` | Number of pages in the write timestamp cache -`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache -`txn.abandons` | Number of abandoned KV transactions -`txn.aborts` | Number of aborted KV transactions -`txn.autoretries` | Number of automatic retries to avoid serializable restarts -`txn.commits1PC` | Number of committed one-phase KV transactions -`txn.commits` | Number of committed KV transactions (including 1PC) -`txn.durations` | KV transaction durations in nanoseconds -`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command -`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer -`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE -`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first -`txn.restarts` | Number of restarted KV transactions -`valbytes` | Number of bytes taken up by values -`valcount` | Count of all values diff --git a/src/current/_includes/v21.2/misc/available-capacity-metric.md b/src/current/_includes/v21.2/misc/available-capacity-metric.md deleted file mode 100644 index 61dbcb9cbf2..00000000000 --- a/src/current/_includes/v21.2/misc/available-capacity-metric.md +++ /dev/null @@ -1 +0,0 @@ -If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is [not recommended in production](recommended-production-settings.html#topology)), you must explicitly [set the store size](cockroach-start.html#store) per node in order to display the correct 
capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity. \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/aws-locations.md b/src/current/_includes/v21.2/misc/aws-locations.md deleted file mode 100644 index 8b073c1f230..00000000000 --- a/src/current/_includes/v21.2/misc/aws-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`| -| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -76.453077)` | -| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` | -| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` | -| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` | -| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` | -| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` | -| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` | -| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` | -| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` | -| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` | -| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` | -| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` | -| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` | -| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` | -| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v21.2/misc/azure-env-param.md b/src/current/_includes/v21.2/misc/azure-env-param.md deleted file mode 100644 index 29b5cb04f2d..00000000000 --- a/src/current/_includes/v21.2/misc/azure-env-param.md +++ /dev/null @@ -1 +0,0 @@ -The [Azure environment](https://learn.microsoft.com/en-us/azure/deployment-environments/concept-environments-key-concepts#environments) that the storage account belongs to. The accepted values are: `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, `AZUREPUBLICCLOUD`, and [`AZUREUSGOVERNMENTCLOUD`](https://learn.microsoft.com/en-us/azure/azure-government/documentation-government-developer-guide). These are cloud environments that meet security, compliance, and data privacy requirements for the respective instance of Azure cloud. If the parameter is not specified, it will default to `AZUREPUBLICCLOUD`. 
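-For example, a sketch of a backup URI that targets a storage account in Azure Government (the container, account name, and key are placeholders, and the parameter name `AZURE_ENVIRONMENT` is assumed from the surrounding cloud-storage URI conventions):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-BACKUP DATABASE movr INTO 'azure://acme-co-backup?AZURE_ACCOUNT_NAME=acmeco&AZURE_ACCOUNT_KEY=<key>&AZURE_ENVIRONMENT=AZUREUSGOVERNMENTCLOUD';
-~~~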
\ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/azure-locations.md b/src/current/_includes/v21.2/misc/azure-locations.md deleted file mode 100644 index 7119ff8b7cb..00000000000 --- a/src/current/_includes/v21.2/misc/azure-locations.md +++ /dev/null @@ -1,30 +0,0 @@ -| Location | SQL Statement | -| -------- | ------------- | -| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` | -| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` | -| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` | -| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` | -| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` | -| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` | -| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` | -| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` | -| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` | -| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` | -| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` | -| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` | -| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` | -| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` | -| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` | -| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` | -| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` | -| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` | -| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` | -| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` | -| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` | -| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` | -| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` | -| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` | -| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` | -| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` | -| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` | -| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` | diff --git 
a/src/current/_includes/v21.2/misc/basic-terms.md b/src/current/_includes/v21.2/misc/basic-terms.md deleted file mode 100644 index 231e29af81f..00000000000 --- a/src/current/_includes/v21.2/misc/basic-terms.md +++ /dev/null @@ -1,12 +0,0 @@ -## CockroachDB architecture terms - -Term | Definition ------|------------ -**cluster** | A group of interconnected storage nodes that collaboratively organize transactions, fault tolerance, and data rebalancing. -**node** | An individual instance of CockroachDB. One or more nodes form a cluster. -**range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a sorted map of key-value pairs. This keyspace is divided into contiguous chunks called _ranges_, such that every key is found in one range.<br><br>From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the _primary index_ because the table is sorted by the primary key) or a single row in a secondary index. As soon as the size of a range reaches 512 MiB ([the default](../configure-replication-zones.html#range-max-bytes)), it is split into two ranges. This process continues for these new ranges as the table and its indexes continue growing. -**replica** | A copy of a range stored on a node. By default, there are three [replicas](../configure-replication-zones.html#num_replicas) of each range on different nodes. -**leaseholder** | The replica that holds the "range lease." This replica receives and coordinates all read and write requests for the range.<br><br>For most types of tables and queries, the leaseholder is the only replica that can serve consistent reads (reads that return "the latest" data). -**Raft protocol** | The [consensus protocol](replication-layer.html#raft) employed in CockroachDB that ensures that your data is safely stored on multiple nodes and that those nodes agree on the current state even if some of them are temporarily disconnected. -**Raft leader** | For each range, the replica that is the "leader" for write requests. The leader uses the Raft protocol to ensure that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder. -**Raft log** | A time-ordered log of writes to a range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication. diff --git a/src/current/_includes/v21.2/misc/beta-release-warning.md b/src/current/_includes/v21.2/misc/beta-release-warning.md deleted file mode 100644 index c228f650d04..00000000000 --- a/src/current/_includes/v21.2/misc/beta-release-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -Beta releases are intended for testing and experimentation only. Beta releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/beta-warning.md b/src/current/_includes/v21.2/misc/beta-warning.md deleted file mode 100644 index 107fc2bfa4b..00000000000 --- a/src/current/_includes/v21.2/misc/beta-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -**This is a beta feature.** It is currently undergoing continued testing. Please [file a GitHub issue](file-an-issue.html) with us if you identify a bug. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/chrome-localhost.md b/src/current/_includes/v21.2/misc/chrome-localhost.md deleted file mode 100644 index d794ff339d0..00000000000 --- a/src/current/_includes/v21.2/misc/chrome-localhost.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If you are using Google Chrome and get an error about not being able to reach `localhost` because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. Enabling this Chrome feature degrades security for all sites running on `localhost`, not just CockroachDB's DB Console, so be sure to enable the feature only temporarily. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/client-side-intervention-example.md b/src/current/_includes/v21.2/misc/client-side-intervention-example.md deleted file mode 100644 index d0bbfc33695..00000000000 --- a/src/current/_includes/v21.2/misc/client-side-intervention-example.md +++ /dev/null @@ -1,28 +0,0 @@ -The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic](advanced-client-side-transaction-retries.html), so it can be used from any programming language or environment. 
In particular, your retry loop must: - -- Raise an error if the `max_retries` limit is reached -- Retry on `40001` error codes -- [`COMMIT`](commit-transaction.html) at the end of the `try` block -- Implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) logic as shown below for best performance - -~~~ python -while true: - n++ - if n == max_retries: - throw Error("did not succeed within N retries") - try: - # add logic here to run all your statements - conn.exec('COMMIT') - break - catch error: - if error.code != "40001": - throw error - else: - # This is a retry error, so we roll back the current transaction - # and sleep for a bit before retrying. The sleep time increases - # for each failed transaction. Adapted from - # https://colintemple.com/2017/03/java-exponential-backoff/ - conn.exec('ROLLBACK'); - sleep_ms = int(((2**n) * 100) + rand( 100 - 1 ) + 1) - sleep(sleep_ms) # Assumes your sleep() takes milliseconds -~~~ diff --git a/src/current/_includes/v21.2/misc/csv-import-callout.md b/src/current/_includes/v21.2/misc/csv-import-callout.md deleted file mode 100644 index 60555c5d0b6..00000000000 --- a/src/current/_includes/v21.2/misc/csv-import-callout.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The column order in your schema must match the column order in the file being imported. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/customizing-the-savepoint-name.md b/src/current/_includes/v21.2/misc/customizing-the-savepoint-name.md deleted file mode 100644 index ed895f906f3..00000000000 --- a/src/current/_includes/v21.2/misc/customizing-the-savepoint-name.md +++ /dev/null @@ -1,5 +0,0 @@ -Set the `force_savepoint_restart` [session variable](set-vars.html#supported-variables) to `true` to enable using a custom name for the [retry savepoint](advanced-client-side-transaction-retries.html#retry-savepoints). - -Once this variable is set, the [`SAVEPOINT`](savepoint.html) statement will accept any name for the retry savepoint, not just `cockroach_restart`. In addition, it causes every savepoint name to be equivalent to `cockroach_restart`, therefore disallowing the use of [nested transactions](transactions.html#nested-transactions). - -This feature exists to support applications that want to use the [advanced client-side transaction retry protocol](advanced-client-side-transaction-retries.html), but cannot customize the name of savepoints to be `cockroach_restart`. For example, this may be necessary because you are using an ORM that requires its own names for savepoints. diff --git a/src/current/_includes/v21.2/misc/database-terms.md b/src/current/_includes/v21.2/misc/database-terms.md deleted file mode 100644 index 11d9bd67c92..00000000000 --- a/src/current/_includes/v21.2/misc/database-terms.md +++ /dev/null @@ -1,10 +0,0 @@ -## Database terms - -Term | Definition ------|----------- -**consistency** | The requirement that a transaction must change affected data only in allowed ways. CockroachDB uses "consistency" in both the sense of [ACID semantics](https://en.wikipedia.org/wiki/ACID) and the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), albeit less formally than either definition. -**isolation** | The degree to which a transaction may be affected by other transactions running at the same time. CockroachDB provides the [`SERIALIZABLE`](https://en.wikipedia.org/wiki/Serializability) isolation level, which is the highest possible and guarantees that every committed transaction has the same result as if each transaction were run one at a time. -**consensus** | The process of reaching agreement on whether a transaction is committed or aborted. CockroachDB uses the [Raft consensus protocol](#architecture-raft). In CockroachDB, when a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.<br><br>When a write does not achieve consensus, forward progress halts to maintain consistency within the cluster. -**replication** | The process of creating and distributing copies of data, as well as ensuring that those copies remain consistent. CockroachDB requires all writes to propagate to a [quorum](https://en.wikipedia.org/wiki/Quorum_%28distributed_computing%29) of copies of the data before being considered committed. This ensures the consistency of your data. -**transaction** | A set of operations performed on a database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/ACID). This is a crucial feature for a consistent system to ensure developers can trust the data in their database. For more information about how transactions work in CockroachDB, see [Transaction Layer](transaction-layer.html). -**multi-active availability** | A consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to _active-passive replication_, in which the active node receives 100% of request traffic, and _active-active_ replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast. diff --git a/src/current/_includes/v21.2/misc/debug-subcommands.md b/src/current/_includes/v21.2/misc/debug-subcommands.md deleted file mode 100644 index 4f6f7d1c678..00000000000 --- a/src/current/_includes/v21.2/misc/debug-subcommands.md +++ /dev/null @@ -1,5 +0,0 @@ -While the `cockroach debug` command has a few subcommands, users are expected to use only the [`zip`](cockroach-debug-zip.html), [`encryption-active-key`](cockroach-debug-encryption-active-key.html), [`merge-logs`](cockroach-debug-merge-logs.html), [`list-files`](cockroach-debug-list-files.html), [`tsdump`](cockroach-debug-tsdump.html), and [`ballast`](cockroach-debug-ballast.html) subcommands. - -We recommend using the [`job-trace`](cockroach-debug-job-trace.html) subcommand only when directed by the [Cockroach Labs support team](support-resources.html). - -The other `debug` subcommands are useful only to CockroachDB's developers and contributors. diff --git a/src/current/_includes/v21.2/misc/delete-statistics.md b/src/current/_includes/v21.2/misc/delete-statistics.md deleted file mode 100644 index 3e4c71db3ec..00000000000 --- a/src/current/_includes/v21.2/misc/delete-statistics.md +++ /dev/null @@ -1,15 +0,0 @@ -To delete statistics for all tables in all databases: - -{% include_cached copy-clipboard.html %} -~~~ sql -DELETE FROM system.table_statistics WHERE true; -~~~ - -To delete a named set of statistics (e.g., one named "users_stats"), run a query like the following: - -{% include_cached copy-clipboard.html %} -~~~ sql -DELETE FROM system.table_statistics WHERE name = 'users_stats'; -~~~ - -For more information about the `DELETE` statement, see [`DELETE`](delete.html). diff --git a/src/current/_includes/v21.2/misc/diagnostics-callout.html b/src/current/_includes/v21.2/misc/diagnostics-callout.html deleted file mode 100644 index a969a8cf152..00000000000 --- a/src/current/_includes/v21.2/misc/diagnostics-callout.html +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. 
For an explanation of the details that get shared and how to opt out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/enterprise-features.md b/src/current/_includes/v21.2/misc/enterprise-features.md deleted file mode 100644 index ebfcec047e0..00000000000 --- a/src/current/_includes/v21.2/misc/enterprise-features.md +++ /dev/null @@ -1,10 +0,0 @@ -Feature | Description ---------+------------------------- -[Multi-Region Capabilities](multiregion-overview.html) | This feature gives you row-level control of how and where your data is stored to dramatically reduce read and write latencies and assist in meeting regulatory requirements in multi-region deployments. -[Follower Reads](follower-reads.html) | This feature reduces read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data. -[`BACKUP`](backup.html) | This feature creates backups of your cluster's schema and data that are consistent as of a given timestamp, stored on a service such as AWS S3, Google Cloud Storage, NFS, or HTTP storage.<br><br>[Incremental backups](take-full-and-incremental-backups.html), [backups with revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), [locality-aware backups](take-and-restore-locality-aware-backups.html), and [encrypted backups](take-and-restore-encrypted-backups.html) require an Enterprise license. [Full backups](take-full-and-incremental-backups.html) do not require an Enterprise license. -[Changefeeds into a Configurable Sink](create-changefeed.html) | This feature targets an allowlist of tables. For every change, it emits a record to a configurable sink, either Apache Kafka or a cloud-storage sink, for downstream processing such as reporting, caching, or full-text indexing. -[Node Map](enable-node-map.html) | This feature visualizes the geographical configuration of a cluster by plotting node localities on a world map. -[Encryption at Rest](security-reference/encryption.html#encryption-at-rest-enterprise) | Supplementing CockroachDB's encryption-in-flight capabilities, this feature provides transparent encryption of a node's data on the local disk. It allows encryption of all files on disk using AES in counter mode, with all key sizes allowed. -[GSSAPI with Kerberos Authentication](gssapi_authentication.html) | CockroachDB supports the Generic Security Services API (GSSAPI) with Kerberos authentication, which lets you use an external enterprise directory system that supports Kerberos, such as Active Directory. -[Single Sign-on (SSO)](sso.html) | This feature lets you use an external identity provider for user access to the DB Console in a secure cluster. \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/explore-benefits-see-also.md b/src/current/_includes/v21.2/misc/explore-benefits-see-also.md deleted file mode 100644 index 6b1a3afed71..00000000000 --- a/src/current/_includes/v21.2/misc/explore-benefits-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Replication & Rebalancing](demo-replication-and-rebalancing.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Low Latency Multi-Region Deployment](demo-low-latency-multi-region-deployment.html) -- [Serializable Transactions](demo-serializable.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html) -- [JSON Support](demo-json-support.html) diff --git a/src/current/_includes/v21.2/misc/force-index-selection.md b/src/current/_includes/v21.2/misc/force-index-selection.md deleted file mode 100644 index 5a14daa6f2a..00000000000 --- a/src/current/_includes/v21.2/misc/force-index-selection.md +++ /dev/null @@ -1,145 +0,0 @@ -By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table. - -{{site.data.alerts.callout_info}} -Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query. 
-{{site.data.alerts.end}} - -##### Force index scan - -The syntax to force a scan of a specific index is: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@my_idx; -~~~ - -This is equivalent to the longer expression: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx}; -~~~ - -##### Force reverse scan - -The syntax to force a reverse scan of a specific index is: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx,DESC}; -~~~ - -Forcing a reverse scan is sometimes useful during [performance tuning](performance-best-practices-overview.html). For reference, the full syntax for choosing an index and its scan direction is - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM table@{FORCE_INDEX=idx[,DIRECTION]} -~~~ - -where the optional `DIRECTION` is either `ASC` (ascending) or `DESC` (descending). - -When a direction is specified, that scan direction is forced; otherwise the [cost-based optimizer](cost-based-optimizer.html) is free to choose the direction it calculates will result in the best performance. - -You can verify that the optimizer is choosing your desired scan direction using [`EXPLAIN (OPT)`](explain.html#opt-option). For example, given the table - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users (k INT PRIMARY KEY, v INT); -~~~ - -you can check the scan direction with: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (opt) SELECT * FROM users@{FORCE_INDEX=primary,DESC}; -~~~ - -~~~ - text -+-------------------------------------+ - scan users,rev - └── flags: force-index=primary,rev -(2 rows) -~~~ - -##### Force partial index scan - -To force a [partial index scan](partial-indexes.html), your statement must have a `WHERE` clause that implies the partial index filter. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE t ( - a INT, - INDEX idx (a) WHERE a > 0); -INSERT INTO t(a) VALUES (5); -SELECT * FROM t@idx WHERE a > 0; -~~~ - -~~~ -CREATE TABLE - -Time: 13ms total (execution 12ms / network 0ms) - -INSERT 1 - -Time: 22ms total (execution 21ms / network 0ms) - - a ----- - 5 -(1 row) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -##### Force partial GIN index scan - -To force a [partial GIN index](inverted-indexes.html#partial-gin-indexes) scan, your statement must have a `WHERE` clause that: - -- Implies the partial index. -- Constrains the GIN index scan. - -{% include_cached copy-clipboard.html %} -~~~ sql -DROP TABLE t; -CREATE TABLE t ( - j JSON, - INVERTED INDEX idx (j) WHERE j->'a' = '1'); -INSERT INTO t(j) - VALUES ('{"a": 1}'), - ('{"a": 3, "b": 2}'), - ('{"a": 1, "b": 2}'); -SELECT * FROM t@idx WHERE j->'a' = '1' AND j->'b' = '2'; -~~~ - -~~~ -DROP TABLE - -Time: 68ms total (execution 22ms / network 45ms) - -CREATE TABLE - -Time: 10ms total (execution 10ms / network 0ms) - -INSERT 3 - -Time: 22ms total (execution 22ms / network 0ms) - - j --------------------- - {"a": 1, "b": 2} -(1 row) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -##### Prevent full scan - -To prevent the optimizer from planning a full scan for a table, specify the `NO_FULL_SCAN` index hint. For example: - -~~~ sql -SELECT * FROM table_name@{NO_FULL_SCAN}; -~~~ - -To prevent a full scan of a [partial index](#force-partial-index-scan), you must specify `NO_FULL_SCAN` _in combination with_ the partial index using `FORCE_INDEX=index_name`. 
-If you specify only `NO_FULL_SCAN`, a full scan of a partial index may be planned. diff --git a/src/current/_includes/v21.2/misc/gce-locations.md b/src/current/_includes/v21.2/misc/gce-locations.md deleted file mode 100644 index 22122aae78d..00000000000 --- a/src/current/_includes/v21.2/misc/gce-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` | -| us-east4 (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` | -| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` | -| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` | -| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` | -| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` | -| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` | -| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` | -| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` | -| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` | -| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` | -| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` | -| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` | -| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` | -| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` | -| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v21.2/misc/geojson_geometry_note.md b/src/current/_includes/v21.2/misc/geojson_geometry_note.md deleted file mode 100644 index ba5fe199657..00000000000 --- a/src/current/_includes/v21.2/misc/geojson_geometry_note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The screenshots in these examples were generated using [geojson.io](http://geojson.io), but they are designed to showcase the shapes, not the map. Representing `GEOMETRY` data in GeoJSON can lead to unexpected results if using geometries with [SRIDs](spatial-glossary.html#srid) other than 4326 (as shown below). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/haproxy.md b/src/current/_includes/v21.2/misc/haproxy.md deleted file mode 100644 index 375af8e937d..00000000000 --- a/src/current/_includes/v21.2/misc/haproxy.md +++ /dev/null @@ -1,39 +0,0 @@ -By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly: - - ~~~ - global - maxconn 4096 - - defaults - mode tcp - # Timeout values should be configured for your specific use. 
- # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. - option clitcpka - - listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 - ~~~ - - The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster: - - Field | Description - ------|------------ - `timeout connect`
      `timeout client`
      `timeout server` | Timeout values that should be suitable for most deployments. - `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.

This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node. - `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms. - `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests. - `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](cockroach-start.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy. - - {{site.data.alerts.callout_info}} - For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html). - {{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/htpp-import-only.md deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/src/current/_includes/v21.2/misc/import-perf.md deleted file mode 100644 index b0520a9c392..00000000000 --- a/src/current/_includes/v21.2/misc/import-perf.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -For best practices for optimizing import performance in CockroachDB, see [Import Performance Best Practices](import-performance-best-practices.html). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/install-next-steps.html deleted file mode 100644 index bb7a9ebc388..00000000000 --- a/src/current/_includes/v21.2/misc/install-next-steps.html +++ /dev/null @@ -1,16 +0,0 @@ - diff --git a/src/current/_includes/v21.2/misc/interleave-deprecation-note.md deleted file mode 100644 index bdd983430f4..00000000000 --- a/src/current/_includes/v21.2/misc/interleave-deprecation-note.md +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_danger}}Interleaving data was deprecated in v20.2, disabled by default in v21.1, and permanently removed in v21.2. For details, see the [v21.1 interleaving deprecation notice](../v21.1/interleave-in-parent.html#deprecation).{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/linux-binary-prereqs.md deleted file mode 100644 index 541183fe71b..00000000000 --- a/src/current/_includes/v21.2/misc/linux-binary-prereqs.md +++ /dev/null @@ -1 +0,0 @@ -

      The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.

      diff --git a/src/current/_includes/v21.2/misc/logging-defaults.md b/src/current/_includes/v21.2/misc/logging-defaults.md deleted file mode 100644 index 1a7ae68a536..00000000000 --- a/src/current/_includes/v21.2/misc/logging-defaults.md +++ /dev/null @@ -1,3 +0,0 @@ -By default, this command logs messages to `stderr`. This includes events with `WARNING` [severity](logging.html#logging-levels-severities) and higher. - -If you need to troubleshoot this command's behavior, you can [customize its logging behavior](configure-logs.html). \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/logging-flags.md b/src/current/_includes/v21.2/misc/logging-flags.md deleted file mode 100644 index eaadb6c8ddb..00000000000 --- a/src/current/_includes/v21.2/misc/logging-flags.md +++ /dev/null @@ -1,11 +0,0 @@ -Flag | Description ------|------------ -`--log` | Configure logging parameters by specifying a YAML payload. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.

      `--log-config-file` can also be used.

      **Note:** The deprecated logging flags below cannot be combined with `--log`, and can be defined instead in the YAML payload. -`--log-config-file` | Configure logging parameters by specifying a path to a YAML file. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.

      `--log` can also be used.

      **Note:** The deprecated logging flags below cannot be combined with `--log-config-file`, and can be defined instead in the YAML payload. -`--log-dir` | **Deprecated.** To enable logging to files and write logs to the specified directory, use [`--log`](configure-logs.html#flag) and set `dir` in the YAML configuration.

      Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory. -`--log-group-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After the logging group (i.e., `cockroach`, `cockroach-sql-audit`, `cockroach-auth`, `cockroach-sql-exec`, `cockroach-pebble`) reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-group-max-size=1GiB`.

      **Default**: 100MiB -`--log-file-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.

      **Default**: 10MiB -`--log-file-verbosity` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Only writes messages to log files if they are at or above the specified [severity level](logging.html#logging-levels-severities), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.

**Default**: `INFO` -`--logtostderr` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Enable logging to `stderr` for messages at or above the specified [severity level](logging.html#logging-levels-severities), such as `--logtostderr=ERROR`.

      If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.

      Setting `--logtostderr=NONE` disables logging to `stderr`. -`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.

      When set to `false`, messages logged to `stderr` are colorized based on [severity level](logging.html#logging-levels-severities).

      **Default:** `false` -`--sql-audit-dir` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. If non-empty, output the `SENSITIVE_ACCESS` [logging channel](logging-overview.html#logging-channels) to this directory.

Note that enabling `SENSITIVE_ACCESS` logs can negatively impact performance. As a result, we recommend using the `SENSITIVE_ACCESS` channel for security purposes only. For more information, see [Logging use cases](logging-use-cases.html#security-and-audit-monitoring). diff --git a/src/current/_includes/v21.2/misc/movr-live-demo.md deleted file mode 100644 index f8cfb24cb21..00000000000 --- a/src/current/_includes/v21.2/misc/movr-live-demo.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -For a live demo of the deployed example application, see [https://movr.cloud](https://movr.cloud). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/movr-schema.md deleted file mode 100644 index 4df488a279c..00000000000 --- a/src/current/_includes/v21.2/misc/movr-schema.md +++ /dev/null @@ -1,12 +0,0 @@ -The six tables in the `movr` database store user, vehicle, and ride data for MovR: - -Table | Description ---------|---------------------------- -`users` | People registered for the service. -`vehicles` | The pool of vehicles available for the service. -`rides` | When and where users have rented a vehicle. -`promo_codes` | Promotional codes for users. -`user_promo_codes` | Promotional codes in use by users. -`vehicle_location_histories` | Vehicle location history. - -Geo-partitioning schema diff --git a/src/current/_includes/v21.2/misc/movr-workflow.md deleted file mode 100644 index 948d95dc1de..00000000000 --- a/src/current/_includes/v21.2/misc/movr-workflow.md +++ /dev/null @@ -1,76 +0,0 @@ -The workflow for MovR is as follows: - -1. A user loads the app and sees the 25 closest vehicles. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT id, city, status FROM vehicles WHERE city='amsterdam' LIMIT 25; - ~~~ - -2. The user signs up for the service. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO users (id, name, address, city, credit_card) - VALUES ('66666666-6666-4400-8000-00000000000f', 'Mariah Lam', '88194 Angela Gardens Suite 60', 'amsterdam', '123245696'); - ~~~ - - {{site.data.alerts.callout_info}}Normally you would generate UUIDs automatically, but for the sake of this walkthrough we use predetermined UUIDs so they are easy to track across the examples.{{site.data.alerts.end}} - -3. In some cases, the user adds their own vehicle to share. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO vehicles (id, city, type, owner_id, creation_time, status, current_location, ext) - VALUES ('ffffffff-ffff-4400-8000-00000000000f', 'amsterdam', 'skateboard', '66666666-6666-4400-8000-00000000000f', current_timestamp(), 'available', '88194 Angela Gardens Suite 60', '{"color": "blue"}'); - ~~~ -4. More often, the user reserves a vehicle and starts a ride, applying a promo code, if available and valid. 
- - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT code FROM user_promo_codes WHERE user_id ='66666666-6666-4400-8000-00000000000f'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE vehicles SET status = 'in_use' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO rides (id, city, vehicle_city, rider_id, vehicle_id, start_address,end_address, start_time, end_time, revenue) - VALUES ('cd032f56-cf1a-4800-8000-00000000066f', 'amsterdam', 'amsterdam', '66666666-6666-4400-8000-00000000000f', 'bbbbbbbb-bbbb-4800-8000-00000000000b', '70458 Mary Crest', '', TIMESTAMP '2020-10-01 10:00:00.123456', NULL, 0.0); - ~~~ - -5. During the ride, MovR tracks the location of the vehicle. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO vehicle_location_histories (city, ride_id, timestamp, lat, long) - VALUES ('amsterdam', 'cd032f56-cf1a-4800-8000-00000000066f', current_timestamp(), -101, 60); - ~~~ - -6. The user ends the ride and releases the vehicle. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE vehicles SET status = 'available' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE rides SET end_address ='33862 Charles Junctions Apt. 49', end_time=TIMESTAMP '2020-10-01 10:30:00.123456', revenue=88.6 - WHERE id='cd032f56-cf1a-4800-8000-00000000066f'; - ~~~ diff --git a/src/current/_includes/v21.2/misc/multiregion-max-offset.md b/src/current/_includes/v21.2/misc/multiregion-max-offset.md deleted file mode 100644 index 07a0dab59c3..00000000000 --- a/src/current/_includes/v21.2/misc/multiregion-max-offset.md +++ /dev/null @@ -1 +0,0 @@ -For new clusters using the [multi-region SQL abstractions](multiregion-overview.html), Cockroach Labs recommends lowering the [`--max-offset`](cockroach-start.html#flags-max-offset) setting to `250ms`. This setting is especially helpful for lowering the write latency of [global tables](multiregion-overview.html#global-tables). For existing clusters, changing the setting will require restarting all of the nodes in your cluster at the same time; it cannot be done with a rolling restart. diff --git a/src/current/_includes/v21.2/misc/non-http-source-privileges.md b/src/current/_includes/v21.2/misc/non-http-source-privileges.md deleted file mode 100644 index dfea2d411e2..00000000000 --- a/src/current/_includes/v21.2/misc/non-http-source-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The source file URL does **not** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. -- [Userfile](use-userfile-for-bulk-operations.html) - -The source file URL **does** require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). 
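To make the `SPECIFIED`-credentials scenario above concrete, here is a minimal sketch of a bulk operation whose storage URI carries its own credentials, so it does not require the `admin` role. The bucket name and the bracketed keys are hypothetical placeholders; substitute your own values:

~~~ sql
-- Hypothetical bucket and placeholder keys: passing credentials in the URI
-- is the SPECIFIED scenario, which does not require the admin role.
BACKUP INTO 's3://my-bucket/backups?AWS_ACCESS_KEY_ID={KEY_ID}&AWS_SECRET_ACCESS_KEY={SECRET}'
    AS OF SYSTEM TIME '-10s';
~~~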
diff --git a/src/current/_includes/v21.2/misc/remove-user-callout.html b/src/current/_includes/v21.2/misc/remove-user-callout.html deleted file mode 100644 index 925f83d779d..00000000000 --- a/src/current/_includes/v21.2/misc/remove-user-callout.html +++ /dev/null @@ -1 +0,0 @@ -Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user. diff --git a/src/current/_includes/v21.2/misc/s3-compatible-warning.md b/src/current/_includes/v21.2/misc/s3-compatible-warning.md deleted file mode 100644 index 1e12b5611d3..00000000000 --- a/src/current/_includes/v21.2/misc/s3-compatible-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we **do not** test [S3-compatible services](use-cloud-storage-for-bulk-operations.html) (e.g., [MinIO](https://min.io/), [Red Hat Ceph](https://docs.ceph.com/en/pacific/radosgw/s3/)). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/schema-change-stmt-note.md b/src/current/_includes/v21.2/misc/schema-change-stmt-note.md deleted file mode 100644 index 576fa59a39c..00000000000 --- a/src/current/_includes/v21.2/misc/schema-change-stmt-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The `{{ page.title }}` statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/misc/schema-change-view-job.md b/src/current/_includes/v21.2/misc/schema-change-view-job.md deleted file mode 100644 index 8861174d621..00000000000 --- a/src/current/_includes/v21.2/misc/schema-change-view-job.md +++ /dev/null @@ -1 +0,0 @@ -This schema change statement is registered as a job. You can view long-running jobs with [`SHOW JOBS`](show-jobs.html). diff --git a/src/current/_includes/v21.2/misc/session-vars.html b/src/current/_includes/v21.2/misc/session-vars.html deleted file mode 100644 index 8c485ef60af..00000000000 --- a/src/current/_includes/v21.2/misc/session-vars.html +++ /dev/null @@ -1,854 +0,0 @@ -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Variable nameDescriptionInitial valueModify with - SET - ?View with - SHOW - ?
- application_name - The current application name for statistics collection.Empty string, or cockroach for sessions from the built-in SQL client.YesYes
- bytea_output - The mode for conversions from STRING to BYTES.hexYesYes
- client_min_messages - The severity level of notices displayed in the SQL shell. -
Accepted values include debug5, debug4, debug3, debug2, debug1, log, notice, warning, and error.
- notice - YesYes
- crdb_version - The version of CockroachDB.CockroachDB OSS versionNoYes
- database - The current database.Database in connection string, or empty if not specified.YesYes
- datestyle - {% include_cached new-in.html version="v21.2" %} The input string format for DATE and TIMESTAMP values. -
Accepted values include ISO,MDY, ISO,DMY, and ISO,YMD. -
To set datestyle to a value other than the default (ISO,MDY), you must first set the datestyle_enabled session variable to true.
The value set by the sql.defaults.datestyle cluster setting (ISO,MDY, by default).YesYes
- datestyle_enabled - {% include_cached new-in.html version="v21.2" %} Enables setting the datestyle session variable to a supported format.The value set by the sql.defaults.datestyle.enabled cluster setting (false, by default).YesYes
- default_int_size - The size, in bytes, of an INT type. - 8 - YesYes
- default_transaction_isolation - All transactions execute with SERIALIZABLE isolation. See Transactions: Isolation levels. - SERIALIZABLE - NoYes
- default_transaction_priority - The default transaction priority for the current session. -
The supported options are low, normal, and high.
- normal - YesYes
- default_transaction_read_only - The default transaction access mode for the current session. -
If set to on, only read operations are allowed in transactions in the current session; if set to off, both read and write operations are allowed. See SET TRANSACTION for more details.
- off - YesYes
- default_transaction_use_follower_reads - If set to on, all read-only transactions use AS OF SYSTEM TIME follower_read_timestamp(), to allow the transaction to use follower reads.
If set to off, read-only transactions will only use follower reads if an AS OF SYSTEM TIME clause is specified in the statement, with an interval of at least 4.8 seconds.
- off - YesYes
- disallow_full_table_scans - If set to on, all queries that have planned a full table or full secondary index scan will return an error message. -
This setting does not apply to internal queries, which may plan full table or index scans without checking the session variable.
- off - YesYes
- distsql - The query distribution mode for the session. -
By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node.
- auto - YesYes
- enable_drop_enum_value - Indicates whether DROP VALUE clauses are enabled for ALTER TYPE statements. - off - YesYes
- - enable_implicit_select_for_update - Indicates whether UPDATE and UPSERT statements acquire locks using the FOR UPDATE locking mode during their initial row scan, which improves performance for contended workloads. -
For more information about how FOR UPDATE locking works, see the documentation for SELECT FOR UPDATE.
- on - YesYes
- enable_insert_fast_path - Indicates whether CockroachDB will use a specialized execution operator for inserting into a table. We recommend leaving this setting on. - on - YesYes
- enable_zigzag_join - Indicates whether the cost-based optimizer will plan certain queries using a zig-zag merge join algorithm, which searches for the desired intersection of two indexes by jumping back and forth between them, relying on the fact that, once constrained, the indexes share an ordering. - on - YesYes
- extra_float_digits - The number of digits displayed for floating-point values. -
Only values between -15 and 3 are supported.
- 0 - YesYes
force_savepoint_restartWhen set to true, allows the SAVEPOINT statement to accept any name for a savepoint. - off - YesYes
foreign_key_cascades_limit Limits the number of cascading operations that run as part of a single query. - 10000 - YesYes
idle_in_session_timeoutAutomatically terminates sessions that idle past the specified threshold.
When set to 0, the session will not time out.
The value set by the sql.defaults.idle_in_session_timeout cluster setting (0s, by default).YesYes
- idle_in_transaction_session_timeout - Automatically terminates sessions that are idle in a transaction past the specified threshold.
When set to 0, the session will not time out.
The value set by the sql.defaults.idle_in_transaction_session_timeout cluster setting (0s, by default).YesYes
- intervalstyle - {% include_cached new-in.html version="v21.2" %} The input string format for INTERVAL values. -
Accepted values include postgres, iso_8601, and sql_standard. -
To set intervalstyle to a value other than the default (postgres), you must first set the intervalstyle_enabled session variable to true.
The value set by the sql.defaults.intervalstyle cluster setting (postgres, by default).YesYes
- intervalstyle_enabled - {% include_cached new-in.html version="v21.2" %} Enables setting the intervalstyle session variable to a supported format.The value set by the sql.defaults.intervalstyle.enabled cluster setting (false, by default).YesYes
- is_superuser - {% include_cached new-in.html version="v21.2" %} If on or true, the current user is a member of the admin role.User-dependentNoYes
- large_full_scan_rows - {% include_cached new-in.html version="v21.2" %} Determines which tables are considered "large" such that disallow_full_table_scans rejects full table or index scans of "large" tables. The default value is 1000. To reject all full table or index scans, set to 0.User-dependentNoYes
- locality - The location of the node.
For more information, see Locality.
Node-dependentNoYes
- lock_timeout - {% include_cached new-in.html version="v21.2" %} The amount of time a query can spend acquiring or waiting for a single row-level lock.
- In CockroachDB, unlike in PostgreSQL, non-locking reads wait for conflicting locks to be released. As a result, the lock_timeout configuration applies to writes, and to locking and non-locking reads in read-write and read-only transactions.
- If lock_timeout = 0, queries do not time out due to lock acquisitions. -
- The value set by the sql.defaults.lock_timeout cluster setting (0, by default) - YesYes
- node_id - The ID of the node currently connected to.
-
This variable is particularly useful for verifying load balanced connections.
Node-dependentNoYes
- optimizer_use_histograms - If on, the optimizer uses collected histograms for cardinality estimation. - on - NoYes
- optimizer_use_multicol_stats - If on, the optimizer uses collected multi-column statistics for cardinality estimation. - on - NoYes
- prefer_lookup_joins_for_fks - If on, the optimizer prefers lookup joins to merge joins when performing foreign key checks. - off - YesYes
- reorder_joins_limit - Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan. -
For more information, see Join reordering.
- 8 - YesYes
- results_buffer_size - The default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. -
This can also be set for all connections using the sql.defaults.results_buffer_size cluster setting. Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retriable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Setting to 0 disables any buffering. -
- 16384 - YesYes
- require_explicit_primary_keys - If on, CockroachDB throws an error for all tables created without an explicit primary key defined. - - off - YesYes
- search_path - A list of schemas that will be searched to resolve unqualified table or function names. -
For more details, see SQL name resolution.
- public - YesYes
- serial_normalization - Specifies the default handling of SERIAL in table definitions. Valid options include 'rowid', 'virtual_sequence', 'sql_sequence', and 'sql_sequence_cached'. -
If set to 'virtual_sequence', the SERIAL type auto-creates a sequence for better compatibility with Hibernate sequences. -
If set to 'sql_sequence_cached', the sql.defaults.serial_sequences_cache_size cluster setting can be used to control the number of values to cache in a user's session, with a default of 256.
- 'rowid' - YesYes
- server_version - The version of PostgreSQL that CockroachDB emulates.Version-dependentNoYes
- server_version_num - The version of PostgreSQL that CockroachDB emulates.Version-dependentYesYes
- session_id - The ID of the current session.Session-dependentNoYes
- session_user - The user connected for the current session.User in connection stringNoYes
- sql_safe_updates - If false, potentially unsafe SQL statements are allowed, including DROP of a non-empty database and all dependent objects, DELETE without a WHERE clause, UPDATE without a WHERE clause, and ALTER TABLE .. DROP COLUMN. -
See Allow Potentially Unsafe SQL Statements for more details.
- true for interactive sessions from the built-in SQL client,
false for sessions from other clients
YesYes
- statement_timeout - The amount of time a statement can run before being stopped. -
This value can be an int (e.g., 10) and will be interpreted as milliseconds. It can also be an interval or string argument, where the string can be parsed as a valid interval (e.g., '4s'). -
A value of 0 turns it off.
The value set by the sql.defaults.statement_timeout cluster setting (0s, by default).YesYes
- stub_catalog_tables - If off, querying an unimplemented, empty pg_catalog table will result in an error, as is the case in v20.2 and earlier. -
If on, querying an unimplemented, empty pg_catalog table simply returns no rows.
- on - YesYes
- timezone - The default time zone for the current session. -
This session variable was named "time zone" (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- UTC - YesYes
- tracing - The trace recording state. - off - - Yes
- transaction_isolation - All transactions execute with SERIALIZABLE isolation. -
See Transactions: Isolation levels. -
This session variable was called transaction isolation level (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- SERIALIZABLE - NoYes
- transaction_priority - The priority of the current transaction. -
See Transactions: Transaction priorities for more details. -
This session variable was called transaction priority (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- NORMAL - YesYes
- transaction_read_only - The access mode of the current transaction. -
See Set Transaction for more details.
- off - YesYes
- transaction_rows_read_err - {% include_cached new-in.html version="v21.2" %} The limit for the number of rows read by a SQL transaction. If this value is exceeded, the transaction will fail (or the event will be logged to SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_rows_read_log - {% include_cached new-in.html version="v21.2" %} The threshold for the number of rows read by a SQL transaction. If this value is exceeded, the event will be logged to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_rows_written_err - {% include_cached new-in.html version="v21.2" %} The limit for the number of rows written by a SQL transaction. If this value is exceeded, the transaction will fail (or the event will be logged to SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_rows_written_log - {% include_cached new-in.html version="v21.2" %} The threshold for the number of rows written by a SQL transaction. If this value is exceeded, the event will be logged to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_status - The state of the current transaction. -
See Transactions for more details. -
This session variable was called transaction status (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- NoTxn - NoYes
- vectorize - The vectorized execution engine mode. -
Options include on and off. -
For more details, see Configure vectorized execution for CockroachDB. -
- on - YesYes
- backslash_quote - (Reserved; exposed only for ORM compatibility.) - safe_encoding - NoYes
- client_encoding - (Reserved; exposed only for ORM compatibility.) - UTF8 - NoYes
- default_tablespace - (Reserved; exposed only for ORM compatibility.) - - NoYes
- enable_seqscan - (Reserved; exposed only for ORM compatibility.) - on - YesYes
- escape_string_warning - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- integer_datetimes - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- max_identifier_length - (Reserved; exposed only for ORM compatibility.) - 128 - NoYes
- max_index_keys - (Reserved; exposed only for ORM compatibility.) - 32 - NoYes
- row_security - (Reserved; exposed only for ORM compatibility.) - off - NoYes
- standard_conforming_strings - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- server_encoding - (Reserved; exposed only for ORM compatibility.) - UTF8 - YesYes
- synchronize_seqscans - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- synchronous_commit - (Reserved; exposed only for ORM compatibility.) - on - YesYes
- troubleshooting_mode_enabled - When enabled, CockroachDB avoids performing additional work on queries, such as collecting and emitting telemetry data. This session variable is particularly useful when the cluster is experiencing issues, unavailability, or failure. - off - YesYes
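As a quick illustration of how the variables in the table above are read and written (a minimal sketch; the variable choices and the values `'movr-dashboard'` and `'10s'` are arbitrary examples):

~~~ sql
-- Inspect a single session variable, or all of them at once.
SHOW application_name;
SHOW ALL;

-- Change a variable for the current session only.
SET application_name = 'movr-dashboard';
SET statement_timeout = '10s';

-- Restore a variable to its default for the session.
RESET statement_timeout;
~~~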
diff --git a/src/current/_includes/v21.2/misc/set-enterprise-license.md b/src/current/_includes/v21.2/misc/set-enterprise-license.md deleted file mode 100644 index 55d71273c32..00000000000 --- a/src/current/_includes/v21.2/misc/set-enterprise-license.md +++ /dev/null @@ -1,16 +0,0 @@ -As the CockroachDB `root` user, open the [built-in SQL shell](cockroach-sql.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode. Then use the [`SET CLUSTER SETTING`](set-cluster-setting.html) command to set the name of your organization and the license key: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING cluster.organization = 'Acme Company'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx'; -~~~ diff --git a/src/current/_includes/v21.2/misc/sorting-delete-output.md b/src/current/_includes/v21.2/misc/sorting-delete-output.md deleted file mode 100644 index a67c7cb3229..00000000000 --- a/src/current/_includes/v21.2/misc/sorting-delete-output.md +++ /dev/null @@ -1,9 +0,0 @@ -To sort the output of a `DELETE` statement, use: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH a AS (DELETE ... RETURNING ...) - SELECT ... FROM a ORDER BY ... -~~~ - -For an example, see [Sort and return deleted rows](delete.html#sort-and-return-deleted-rows). diff --git a/src/current/_includes/v21.2/misc/source-privileges.md b/src/current/_includes/v21.2/misc/source-privileges.md deleted file mode 100644 index 135a153b83f..00000000000 --- a/src/current/_includes/v21.2/misc/source-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The source file URL does _not_ require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. -- [Userfile](use-userfile-for-bulk-operations.html) - -The source file URL _does_ require the [`admin` role](security-reference/authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html), [HTTP](use-a-local-file-server-for-bulk-operations.html) or [HTTPS] (use-a-local-file-server-for-bulk-operations.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). diff --git a/src/current/_includes/v21.2/misc/storage-class-glacier-incremental.md b/src/current/_includes/v21.2/misc/storage-class-glacier-incremental.md deleted file mode 100644 index 92d1f6cf90d..00000000000 --- a/src/current/_includes/v21.2/misc/storage-class-glacier-incremental.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -[Incremental backups](take-full-and-incremental-backups.html#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups. The Glacier Flexible Retrieval or Glacier Deep Archive storage classes do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. 
-{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/misc/storage-classes.md deleted file mode 100644 index c4dafce941e..00000000000 --- a/src/current/_includes/v21.2/misc/storage-classes.md +++ /dev/null @@ -1 +0,0 @@ -Use the parameter to set one of the [storage classes](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) listed in Amazon's documentation. For more general usage information, see Amazon's [Using Amazon S3 storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html) documentation. diff --git a/src/current/_includes/v21.2/misc/tooling.md deleted file mode 100644 index 50678ca2c41..00000000000 --- a/src/current/_includes/v21.2/misc/tooling.md +++ /dev/null @@ -1,70 +0,0 @@ -## Support levels - -Cockroach Labs has partnered with open-source projects, vendors, and individuals to offer the following levels of support with third-party tools: - -- **Full support** indicates that Cockroach Labs is committed to maintaining compatibility with the vast majority of the tool's features. CockroachDB is regularly tested against the latest version documented in the table below. -- **Partial support** indicates that Cockroach Labs is working towards full support for the tool. The primary features of the tool are compatible with CockroachDB (e.g., connecting and basic database operations), but full integration may require additional steps, lack support for all features, or exhibit unexpected behavior. - -{{site.data.alerts.callout_info}} -Unless explicitly stated, support for a [driver](#drivers) or [data access framework](#data-access-frameworks-e-g-orms) does not include [automatic, client-side transaction retry handling](transactions.html#client-side-intervention). For client-side transaction retry handling samples, see [Example Apps](example-apps.html). -{{site.data.alerts.end}} - -If you encounter problems using CockroachDB with any of the tools listed on this page, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward better support. - -For a list of tools supported by the CockroachDB community, see [Third-Party Tools Supported by the Community](community-tooling.html). - -## Drivers - -| Language | Driver | Latest tested version | Support level | CockroachDB adapter | Tutorial | -|----------+--------+-----------------------+---------------------+---------------------+----------| -| C | [libpq](http://www.postgresql.org/docs/13/static/libpq.html) | PostgreSQL 13 | Beta | N/A | N/A | -| C# (.NET) | [Npgsql](https://www.nuget.org/packages/Npgsql/) | 4.1.3.1 | Beta | N/A | [Build a C# App with CockroachDB (Npgsql)](build-a-csharp-app-with-cockroachdb.html) | -| Go | [pgx](https://github.com/jackc/pgx/releases)


[pq](https://github.com/lib/pq) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/pgx.go ||var supportedPGXTag = "||"\n\n %}
(use latest version of CockroachDB adapter)
{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/libpq.go ||var libPQSupportedTag = "||"\n\n %} | Full


Full | [`crdbpgx`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbpgx)
(includes client-side transaction retry handling)
N/A | [Build a Go App with CockroachDB (pgx)](build-a-go-app-with-cockroachdb.html)


[Build a Go App with CockroachDB (pq)](build-a-go-app-with-cockroachdb-pq.html) | -| Java | [JDBC](https://jdbc.postgresql.org/download/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/pgjdbc.go ||var supportedPGJDBCTag = "||"\n\n %} | Full | N/A | [Build a Java App with CockroachDB (JDBC)](build-a-java-app-with-cockroachdb.html) | -| JavaScript | [pg](https://www.npmjs.com/package/pg) | 8.2.1 | Full | N/A | [Build a Node.js App with CockroachDB (pg)](build-a-nodejs-app-with-cockroachdb.html) | -| Python | [psycopg2](https://www.psycopg.org/docs/install.html) | 2.8.6 | Full | N/A | [Build a Python App with CockroachDB (psycopg2)](build-a-python-app-with-cockroachdb.html) | -| Ruby | [pg](https://rubygems.org/gems/pg) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/ruby_pg.go ||var rubyPGVersion = "||"\n\n %} | Full | N/A | [Build a Ruby App with CockroachDB (pg)](build-a-ruby-app-with-cockroachdb.html) | -| Rust | [rust-postgres](https://github.com/sfackler/rust-postgres) | 0.19.2 | Beta | N/A | [Build a Rust App with CockroachDB](build-a-rust-app-with-cockroachdb.html) | - -## Data access frameworks (e.g., ORMs) - -| Language | Framework | Latest tested version | Support level | CockroachDB adapter | Tutorial | -|----------+-----------+-----------------------+---------------+---------------------+----------| -| Go | [GORM](https://github.com/jinzhu/gorm/releases)


[go-pg](https://github.com/go-pg/pg)
[upper/db](https://github.com/upper/db) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/gorm.go ||var gormSupportedTag = "||"\n\n %}


{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/gopg.go ||var gopgSupportedTag = "||"\n\n %}
v4 | Full


Full
Full | [`crdbgorm`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbgorm)
(includes client-side transaction retry handling)
N/A
N/A | [Build a Go App with CockroachDB (GORM)](build-a-go-app-with-cockroachdb-gorm.html)


N/A
[Build a Go App with CockroachDB (upper/db)](build-a-go-app-with-cockroachdb-upperdb.html) | -| Java | [Hibernate](https://hibernate.org/orm/)
(including [Hibernate Spatial](https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#spatial))
[jOOQ](https://www.jooq.org/)
[MyBatis](https://mybatis.org/mybatis-3/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/hibernate.go ||var supportedHibernateTag = "||"\n\n %} (must be 5.4.19)


3.13.2 (must be 3.13.0)
3.5.5| Full


Full
Full | N/A


N/A
N/A | [Build a Java App with CockroachDB (Hibernate)](build-a-java-app-with-cockroachdb-hibernate.html)


[Build a Java App with CockroachDB (jOOQ)](build-a-java-app-with-cockroachdb-jooq.html)
[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html) | -| JavaScript/TypeScript | [Sequelize](https://www.npmjs.com/package/sequelize)


[Knex.js](https://knexjs.org/)
[Prisma](https://prisma.io)
[TypeORM](https://www.npmjs.com/package/typeorm) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/sequelize.go ||var supportedSequelizeCockroachDBRelease = "||"\n\n %}
(use latest version of CockroachDB adapter)
{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/knex.go ||const supportedKnexTag = "||"\n\n %}
3.9.0
0.3.17 {% comment %}{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/typeorm.go ||const supportedTypeORMRelease = "||"\n %}{% endcomment %} | Full


Full
Full
Full | [`sequelize-cockroachdb`](https://www.npmjs.com/package/sequelize-cockroachdb)


N/A
N/A
N/A | [Build a Node.js App with CockroachDB (Sequelize)](build-a-nodejs-app-with-cockroachdb-sequelize.html)


[Build a Node.js App with CockroachDB (Knex.js)](build-a-nodejs-app-with-cockroachdb-knexjs.html)
[Build a Node.js App with CockroachDB (Prisma)](build-a-nodejs-app-with-cockroachdb-prisma.html)
[Build a TypeScript App with CockroachDB (TypeORM)](build-a-typescript-app-with-cockroachdb.html) | -| Ruby | [ActiveRecord](https://rubygems.org/gems/activerecord)
[RGeo/RGeo-ActiveRecord](https://github.com/cockroachdb/activerecord-cockroachdb-adapter#working-with-spatial-data) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/activerecord.go ||var supportedRailsVersion = "||"\nvar %}
(use latest version of CockroachDB adapter) | Full | [`activerecord-cockroachdb-adapter`](https://rubygems.org/gems/activerecord-cockroachdb-adapter)
(includes client-side transaction retry handling) | [Build a Ruby App with CockroachDB (ActiveRecord)](build-a-ruby-app-with-cockroachdb-activerecord.html) | -| Python | [Django](https://pypi.org/project/Django/)
(including [GeoDjango](https://docs.djangoproject.com/en/3.1/ref/contrib/gis/))
[peewee](https://github.com/coleifer/peewee/)
[SQLAlchemy](https://www.sqlalchemy.org/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-21.2/pkg/cmd/roachtest/tests/django.go ||var djangoSupportedTag = "cockroach-||"\nvar %}
(use latest version of CockroachDB adapter)

3.13.3
0.7.13
1.4.17
(use latest version of CockroachDB adapter) | Full


Full
Full
Full | [`django-cockroachdb`](https://pypi.org/project/django-cockroachdb/)


N/A
N/A
[`sqlalchemy-cockroachdb`](https://pypi.org/project/sqlalchemy-cockroachdb)
(includes client-side transaction retry handling) | [Build a Python App with CockroachDB (Django)](build-a-python-app-with-cockroachdb-django.html)


N/A (See [peewee docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#cockroach-database).)
[Build a Python App with CockroachDB (SQLAlchemy)](build-a-python-app-with-cockroachdb-sqlalchemy.html) | - -## Application frameworks - -| Framework | Data access | Latest tested version | Support level | Tutorial | -|-----------+-------------+-----------------------+---------------+----------| -| Spring | [JDBC](build-a-spring-app-with-cockroachdb-jdbc.html)
[JPA (Hibernate)](build-a-spring-app-with-cockroachdb-jpa.html)
[MyBatis](build-a-spring-app-with-cockroachdb-mybatis.html) | See individual Java ORM or [driver](#drivers) for data access version support. | See individual Java ORM or [driver](#drivers) for data access support level. | [Build a Spring App with CockroachDB (JDBC)](build-a-spring-app-with-cockroachdb-jdbc.html)
[Build a Spring App with CockroachDB (JPA)](build-a-spring-app-with-cockroachdb-jpa.html)
[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html) - -## Graphical user interfaces (GUIs) - -| GUI | Latest tested version | Support level | Tutorial | -|-----+-----------------------+---------------+----------| -| [DBeaver](https://dbeaver.com/) | 5.2.3 | Full | [Visualize CockroachDB Schemas with DBeaver](dbeaver.html) - -## Integrated development environments (IDEs) - -| IDE | Latest tested version | Support level | Tutorial | -|-----+-----------------------+---------------+----------| -| [DataGrip](https://www.jetbrains.com/datagrip/) | 2021.1 | Full | N/A -| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | 2021.1 | Full | [Use IntelliJ IDEA with CockroachDB](intellij-idea.html) - -## Schema migration tools - -| Tool | Latest tested version | Support level | Tutorial | -|-----+------------------------+----------------+----------| -| [Alembic](https://alembic.sqlalchemy.org/en/latest/) | 1.7 | Full | [Migrate CockroachDB Schemas with Alembic](alembic.html) -| [Flyway](https://flywaydb.org/documentation/commandline/#download-and-installation) | 7.1.0 | Full | [Migrate CockroachDB Schemas with Flyway](flyway.html) -| [Liquibase](https://www.liquibase.org/download) | 4.2.0 | Full | [Migrate CockroachDB Schemas with Liquibase](liquibase.html) - -## Other tools - -| Tool | Latest tested version | Support level | Tutorial | -|-----+------------------------+---------------+----------| -| [Flowable](https://github.com/flowable/flowable-engine) | 6.4.2 | Full | [Getting Started with Flowable and CockroachDB (external)](https://blog.flowable.org/2019/07/11/getting-started-with-flowable-and-cockroachdb/) diff --git a/src/current/_includes/v21.2/misc/userfile.md b/src/current/_includes/v21.2/misc/userfile.md deleted file mode 100644 index 1a23d5d2c39..00000000000 --- a/src/current/_includes/v21.2/misc/userfile.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} - CockroachDB now supports uploading files to a [user-scoped file storage](use-userfile-for-bulk-operations.html) using a SQL connection. We recommend using `userfile` instead of `nodelocal`, as it is user-scoped and more secure. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/orchestration/apply-custom-resource.md b/src/current/_includes/v21.2/orchestration/apply-custom-resource.md deleted file mode 100644 index e7aacf41a1e..00000000000 --- a/src/current/_includes/v21.2/orchestration/apply-custom-resource.md +++ /dev/null @@ -1,6 +0,0 @@ -Apply the new settings to the cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl apply -f example.yaml -~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/apply-helm-values.md b/src/current/_includes/v21.2/orchestration/apply-helm-values.md deleted file mode 100644 index 90f9c8783f8..00000000000 --- a/src/current/_includes/v21.2/orchestration/apply-helm-values.md +++ /dev/null @@ -1,6 +0,0 @@ -Apply the custom values to override the default Helm chart [values](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb -~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/apply-statefulset-manifest.md b/src/current/_includes/v21.2/orchestration/apply-statefulset-manifest.md deleted file mode 100644 index 0236903c497..00000000000 --- a/src/current/_includes/v21.2/orchestration/apply-statefulset-manifest.md +++ /dev/null @@ -1,6 +0,0 @@ -Apply the new settings to the cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl apply -f {statefulset-manifest}.yaml -~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-basic-sql.md b/src/current/_includes/v21.2/orchestration/kubernetes-basic-sql.md deleted file mode 100644 index f7cfbd76641..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-basic-sql.md +++ /dev/null @@ -1,44 +0,0 @@ -1. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts VALUES (1, 1000.50); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +----+---------+ - 1 | 1000.50 - (1 row) - ~~~ - -1. [Create a user with a password](create-user.html#create-a-user-with-a-password): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS'; - ~~~ - - You will need this username and password to access the DB Console later. - -1. Exit the SQL shell and pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-cockroach-cert.md b/src/current/_includes/v21.2/orchestration/kubernetes-cockroach-cert.md deleted file mode 100644 index ff44cf183a4..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-cockroach-cert.md +++ /dev/null @@ -1,90 +0,0 @@ -{{site.data.alerts.callout_info}} -The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. Read our [Authentication](authentication.html#using-digital-certificates-with-cockroachdb) docs to learn about other methods of signing certificates. 
-{{site.data.alerts.end}} - -1. Create two directories: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs my-safe-directory - ~~~ - - Directory | Description - ----------|------------ - `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory. - `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates. - -1. Create the CA certificate and key pair: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Create a client certificate and key pair for the root user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Upload the client certificate and key to the Kubernetes cluster as a secret: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create secret \ - generic cockroachdb.client.root \ - --from-file=certs - ~~~ - - ~~~ - secret/cockroachdb.client.root created - ~~~ - -1. Create the certificate and key pair for your CockroachDB nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - localhost 127.0.0.1 \ - cockroachdb-public \ - cockroachdb-public.default \ - cockroachdb-public.default.svc.cluster.local \ - *.cockroachdb \ - *.cockroachdb.default \ - *.cockroachdb.default.svc.cluster.local \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Upload the node certificate and key to the Kubernetes cluster as a secret: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create secret \ - generic cockroachdb.node \ - --from-file=certs - ~~~ - - ~~~ - secret/cockroachdb.node created - ~~~ - -1. Check that the secrets were created on the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get secrets - ~~~ - - ~~~ - NAME TYPE DATA AGE - cockroachdb.client.root Opaque 3 41m - cockroachdb.node Opaque 5 14s - default-token-6qjdb kubernetes.io/service-account-token 3 4m - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-expand-disk-helm.md b/src/current/_includes/v21.2/orchestration/kubernetes-expand-disk-helm.md deleted file mode 100644 index 4ec3d2f171f..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-expand-disk-helm.md +++ /dev/null @@ -1,118 +0,0 @@ -You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes -) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. - -{{site.data.alerts.callout_info}} -These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=helm). -{{site.data.alerts.end}} - -1. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. 
In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe storageclass standard - ~~~ - - ~~~ - Name: standard - IsDefaultClass: Yes - Annotations: storageclass.kubernetes.io/is-default-class=true - Provisioner: kubernetes.io/gce-pd - Parameters: type=pd-standard - AllowVolumeExpansion: False - MountOptions: - ReclaimPolicy: Delete - VolumeBindingMode: Immediate - Events: - ~~~ - - If necessary, edit the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}' - ~~~ - - ~~~ - storageclass.storage.k8s.io/standard patched - ~~~ - -1. Edit one of the persistent volume claims to request more space: - - {{site.data.alerts.callout_info}} - The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched - ~~~ - -1. Check the capacity of the persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ - - If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity. - - {{site.data.alerts.callout_success}} - Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`. - {{site.data.alerts.end}} - -1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - Waiting for user to (re-)start a pod to finish file system resize of volume on node. - ~~~ - -1. Delete the corresponding pod to restart it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-0 - ~~~ - - The `FileSystemResizePending` condition and message will be removed. - -1. View the updated persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m - ~~~ - -1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount. 
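-
-Once all of the volumes have been expanded, a quick way to confirm that each claim reports the new size is to list the claims one more time. This is a minimal sketch; it assumes the three PVC names shown above and a POSIX shell:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Print the reported capacity of each CockroachDB volume in turn.
-$ for i in 0 1 2; do kubectl get pvc "datadir-my-release-cockroachdb-$i"; done
-~~~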
\ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-expand-disk-manual.md b/src/current/_includes/v21.2/orchestration/kubernetes-expand-disk-manual.md deleted file mode 100644 index e6cf4bbbddb..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-expand-disk-manual.md +++ /dev/null @@ -1,118 +0,0 @@ -You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes -) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. - -{{site.data.alerts.callout_info}} -These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=manual). -{{site.data.alerts.end}} - -1. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe storageclass standard - ~~~ - - ~~~ - Name: standard - IsDefaultClass: Yes - Annotations: storageclass.kubernetes.io/is-default-class=true - Provisioner: kubernetes.io/gce-pd - Parameters: type=pd-standard - AllowVolumeExpansion: False - MountOptions: - ReclaimPolicy: Delete - VolumeBindingMode: Immediate - Events: - ~~~ - - If necessary, edit the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}' - ~~~ - - ~~~ - storageclass.storage.k8s.io/standard patched - ~~~ - -1. Edit one of the persistent volume claims to request more space: - - {{site.data.alerts.callout_info}} - The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-cockroachdb-0 patched - ~~~ - -1. Check the capacity of the persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ - - If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity. - - {{site.data.alerts.callout_success}} - Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`. 
-    {{site.data.alerts.end}}
-
-1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl describe pvc datadir-cockroachdb-0
-    ~~~
-
-    ~~~
-    Waiting for user to (re-)start a pod to finish file system resize of volume on node.
-    ~~~
-
-1. Delete the corresponding pod to restart it:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl delete pod cockroachdb-0
-    ~~~
-
-    The `FileSystemResizePending` condition and message will be removed.
-
-1. View the updated persistent volume claim:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pvc datadir-cockroachdb-0
-    ~~~
-
-    ~~~
-    NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
-    datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   200Gi      RWO            standard       20m
-    ~~~
-
-1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3-6 to increase the capacities of the remaining volumes by the same amount.
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-limitations.md b/src/current/_includes/v21.2/orchestration/kubernetes-limitations.md
deleted file mode 100644
index 29d574197a1..00000000000
--- a/src/current/_includes/v21.2/orchestration/kubernetes-limitations.md
+++ /dev/null
@@ -1,35 +0,0 @@
-#### Kubernetes version
-
-To deploy CockroachDB {{page.version.version}}, Kubernetes 1.18 or higher is required. Cockroach Labs strongly recommends that you use a Kubernetes version that is [eligible for patch support by the Kubernetes project](https://kubernetes.io/releases/).
-
-#### Kubernetes Operator
-
-The CockroachDB Kubernetes Operator currently deploys clusters in a single region. For multi-region deployments using manual configs, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters](orchestrate-cockroachdb-with-kubernetes-multi-cluster.html).
-
-#### Helm version
-
-The CockroachDB Helm chart requires Helm 3.0 or higher. If you attempt to use an incompatible Helm version, an error like the following occurs:
-
-~~~ shell
-Error: UPGRADE FAILED: template: cockroachdb/templates/tests/client.yaml:6:14: executing "cockroachdb/templates/tests/client.yaml" at <.Values.networkPolicy.enabled>: nil pointer evaluating interface {}.enabled
-~~~
-
-The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier.
-
-The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs.
-
-A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation.
-
-#### Network
-
-Server Name Indication (SNI) is an extension to the TLS protocol that allows a client to indicate, at the start of the TLS handshake, which hostname it is attempting to connect to. The server can present multiple certificates on the same IP address and TCP port number, so one server can serve multiple secure websites or API services even if they use different certificates.
-
-Due to its order of operations, the PostgreSQL wire protocol's implementation of TLS is not compatible with SNI-based routing in the Kubernetes ingress controller.
Instead, use a TCP load balancer for CockroachDB that is not shared with other services.
-
-#### Resources
-
-When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB** of memory, and provision at least **2 vCPUs** and **8 GiB** of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide, and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload. For details, see [Resource management](configure-cockroachdb-kubernetes.html#memory-and-cpu).
-
-#### Storage
-
-At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local).
diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-helm.md b/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-helm.md
deleted file mode 100644
index cbb34893aad..00000000000
--- a/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-helm.md
+++ /dev/null
@@ -1,126 +0,0 @@
-Before removing a node from your cluster, you must first decommission the node. This lets the node finish in-flight requests, reject any new requests, and transfer all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown).
-{{site.data.alerts.end}}
-
-1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes.
For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -1. Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `my-release-cockroachdb-3`): - - {{site.data.alerts.callout_info}} - You must decommission the node with the highest number in its address. Kubernetes will remove the pod for the node with the highest number in its address when you reduce the replica count. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission 4 \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 73 | true | decommissioning | false - ~~~ - - Once the node has been fully decommissioned, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 0 | true | decommissioning | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -1. Once the node has been decommissioned, scale down your StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=3 \ - --reuse-values - ~~~ - -1. 
Verify that the pod was successfully removed: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 51m - my-release-cockroachdb-1 1/1 Running 0 47m - my-release-cockroachdb-2 1/1 Running 0 3m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. Verify that the PVC with the highest number in its name is no longer mounted to a pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-my-release-cockroachdb-3 - ~~~ - - ~~~ - Name: datadir-my-release-cockroachdb-3 - ... - Mounted By: - ~~~ - -1. Remove the persistent volume by deleting the PVC: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pvc datadir-my-release-cockroachdb-3 - ~~~ - - ~~~ - persistentvolumeclaim "datadir-my-release-cockroachdb-3" deleted - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-insecure.md b/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-insecure.md deleted file mode 100644 index e359b3b2d8d..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-insecure.md +++ /dev/null @@ -1,129 +0,0 @@ -To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown). -{{site.data.alerts.end}} - -1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - -
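-
-    If you plan to script the decommissioning step below, the same command can emit machine-readable output via the `--format` flag; a sketch, assuming the same insecure flags as above:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl run cockroachdb -it \
-    --image=cockroachdb/cockroach:{{page.release_info.version}} \
-    --rm \
-    --restart=Never \
-    -- node status \
-    --insecure \
-    --format=csv \
-    --host=cockroachdb-public
-    ~~~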
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ -
- -2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it: - - {{site.data.alerts.callout_info}} - It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node. - {{site.data.alerts.end}} - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=cockroachdb-public - ~~~ -
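-
-    While the decommission is in progress, you can watch it drain from a second terminal using the `--decommission` flag of `cockroach node status`. A sketch, assuming the same insecure flags and a distinct temporary pod name (here `cockroachdb-status`, a hypothetical name) so it does not collide with the pod above:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl run cockroachdb-status -it \
-    --image=cockroachdb/cockroach:{{page.release_info.version}} \
-    --rm \
-    --restart=Never \
-    -- node status \
-    --decommission \
-    --insecure \
-    --host=cockroachdb-public
-    ~~~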
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
- - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 73 | true | decommissioning | false - ~~~ - - Once the node has been fully decommissioned, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 0 | true | decommissioning | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -3. Once the node has been decommissioned, remove a pod from your StatefulSet: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset "cockroachdb" scaled - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=3 \ - --reuse-values - ~~~ -
diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-manual.md b/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-manual.md deleted file mode 100644 index c8cc789567b..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-remove-nodes-manual.md +++ /dev/null @@ -1,126 +0,0 @@ -Before removing a node from your cluster, you must first decommission the node. This lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Prepare for graceful shutdown](node-shutdown.html?filters=decommission#prepare-for-graceful-shutdown). -{{site.data.alerts.end}} - -1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes. For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -1. Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `cockroachdb-3`): - - {{site.data.alerts.callout_info}} - You must decommission the node with the highest number in its address. Kubernetes will remove the pod for the node with the highest number in its address when you reduce the replica count. 
- {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission 4 \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 73 | true | decommissioning | false - ~~~ - - Once the node has been fully decommissioned, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | membership | is_draining - -----+---------+----------+--------------------+-----------------+-------------- - 4 | true | 0 | true | decommissioning | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -1. Once the node has been decommissioned, scale down your StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ - -1. Verify that the pod was successfully removed: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 51m - cockroachdb-1 1/1 Running 0 47m - cockroachdb-2 1/1 Running 0 3m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. Verify that the PVC with the highest number in its name is no longer mounted to a pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-cockroachdb-3 - ~~~ - - ~~~ - Name: datadir-cockroachdb-3 - ... - Mounted By: - ~~~ - -1. Remove the persistent volume by deleting the PVC: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pvc datadir-cockroachdb-3 - ~~~ - - ~~~ - persistentvolumeclaim "datadir-cockroachdb-3" deleted - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-scale-cluster-helm.md b/src/current/_includes/v21.2/orchestration/kubernetes-scale-cluster-helm.md deleted file mode 100644 index 8556b822651..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-scale-cluster-helm.md +++ /dev/null @@ -1,118 +0,0 @@ -Before scaling CockroachDB, ensure that your Kubernetes cluster has enough worker nodes to host the number of pods you want to add. This is to ensure that two pods are not placed on the same worker node, as recommended in our [production guidance](recommended-production-settings.html#topology). - -For example, if you want to scale from 3 CockroachDB nodes to 4, your Kubernetes cluster should have at least 4 worker nodes. 
You can verify the size of your Kubernetes cluster by running `kubectl get nodes`. - -1. Edit your StatefulSet configuration to add another pod for the new CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=4 \ - --reuse-values - ~~~ - - ~~~ - Release "my-release" has been upgraded. Happy Helming! - LAST DEPLOYED: Tue May 14 14:06:43 2019 - NAMESPACE: default - STATUS: DEPLOYED - - RESOURCES: - ==> v1beta1/PodDisruptionBudget - NAME AGE - my-release-cockroachdb-budget 51m - - ==> v1/Pod(related) - - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 38m - my-release-cockroachdb-1 1/1 Running 0 39m - my-release-cockroachdb-2 1/1 Running 0 39m - my-release-cockroachdb-3 0/1 Pending 0 0s - my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m - - ... - ~~~ - -1. Get the name of the `Pending` CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-3 2m system:serviceaccount:default:default Pending - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ... - ~~~ - - If you do not see a `Pending` CSR, wait a minute and try again. - -1. Examine the CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe csr default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - Name: default.node.my-release-cockroachdb-3 - Labels: - Annotations: - CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500 - Requesting User: system:serviceaccount:default:default - Status: Pending - Subject: - Common Name: node - Serial Number: - Organization: Cockroach - Subject Alternative Names: - DNS Names: localhost - my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local - my-release-cockroachdb-1.my-release-cockroachdb - my-release-cockroachdb-public - my-release-cockroachdb-public.default.svc.cluster.local - IP Addresses: 127.0.0.1 - 10.48.1.6 - Events: - ~~~ - -1. If everything looks correct, approve the CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-3 approved - ~~~ - -1. Verify that the new pod started successfully: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 51m - my-release-cockroachdb-1 1/1 Running 0 47m - my-release-cockroachdb-2 1/1 Running 0 3m - my-release-cockroachdb-3 1/1 Running 0 1m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the fourth node successfully joined the cluster. 
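-
-If you prefer to verify from the command line instead of the DB Console, you can re-run `cockroach node status` from the secure client pod, as in the earlier steps. A sketch, assuming the `cockroachdb-client-secure` pod from the deployment tutorial is still running:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl exec -it cockroachdb-client-secure \
--- ./cockroach node status \
---certs-dir=/cockroach-certs \
---host=my-release-cockroachdb-public
-~~~
-
-The new node should appear as a fourth row with `is_live` shown as `true`.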
\ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-scale-cluster-manual.md b/src/current/_includes/v21.2/orchestration/kubernetes-scale-cluster-manual.md deleted file mode 100644 index f42775704d3..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-scale-cluster-manual.md +++ /dev/null @@ -1,51 +0,0 @@ -Before scaling up CockroachDB, note the following [topology recommendations](recommended-production-settings.html#topology): - -- Each CockroachDB node (running in its own pod) should run on a separate Kubernetes worker node. -- Each availability zone should have the same number of CockroachDB nodes. - -If your cluster has 3 CockroachDB nodes distributed across 3 availability zones (as in our [deployment example](deploy-cockroachdb-with-kubernetes.html?filters=manual)), we recommend scaling up by a multiple of 3 to retain an even distribution of nodes. You should therefore scale up to a minimum of 6 CockroachDB nodes, with 2 nodes in each zone. - -1. Run `kubectl get nodes` to list the worker nodes in your Kubernetes cluster. There should be at least as many worker nodes as pods you plan to add. This ensures that no more than one pod will be placed on each worker node. - -1. Add worker nodes if necessary: - - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster). If you deployed a [regional cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster) as we recommended, you will use `--num-nodes` to specify the desired number of worker nodes in each zone. For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud container clusters resize {cluster-name} --region {region-name} --num-nodes 2 - ~~~ - - On EKS, resize your [Worker Node Group](https://eksctl.io/usage/managing-nodegroups/#scaling). - - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/). - - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html). - -1. Edit your StatefulSet configuration to add pods for each new CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=6 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ - -1. Verify that the new pod started successfully: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 51m - cockroachdb-1 1/1 Running 0 47m - cockroachdb-2 1/1 Running 0 3m - cockroachdb-3 1/1 Running 0 1m - cockroachdb-4 1/1 Running 0 1m - cockroachdb-5 1/1 Running 0 1m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the fourth node successfully joined the cluster. \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v21.2/orchestration/kubernetes-simulate-failure.md deleted file mode 100644 index 5738885935e..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-simulate-failure.md +++ /dev/null @@ -1,79 +0,0 @@ -Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. 
When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage. - -To see this in action: - -1. Terminate one of the CockroachDB nodes: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ -
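-
-    If you want to watch Kubernetes replace the pod in real time, one option is to leave a watch running in a second terminal before deleting it. This is a generic `kubectl` sketch, not specific to any one configuration:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    # Stream pod status changes until interrupted (Ctrl-C to stop).
-    $ kubectl get pods --watch
-    ~~~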
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-2 - ~~~ - - ~~~ - pod "my-release-cockroachdb-2" deleted - ~~~ -
- - -2. In the DB Console, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy. - -3. Back in the terminal, verify that the pod was automatically restarted: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod my-release-cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-2 1/1 Running 0 44s - ~~~ -
diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-stop-cluster.md b/src/current/_includes/v21.2/orchestration/kubernetes-stop-cluster.md deleted file mode 100644 index afc17479b82..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-stop-cluster.md +++ /dev/null @@ -1,145 +0,0 @@ -To shut down the CockroachDB cluster: - -
-{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -1. Delete the previously created custom resource: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete -f example.yaml - ~~~ - -1. Remove the Operator: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - This will delete the CockroachDB cluster being run by the Operator. It will *not* delete the persistent volumes that were attached to the pods. - - {{site.data.alerts.callout_danger}} - If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes). - {{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). -{{site.data.alerts.end}} -
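-
-As a final check, you can list anything that remains; a minimal sketch (the persistent volume claims are expected to survive unless you deleted them deliberately):
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-# Any remaining CockroachDB pods and persistent volume claims will be listed here.
-$ kubectl get pods,pvc
-~~~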
- -
-1. Delete the resources associated with the `cockroachdb` label, including the logs and Prometheus and Alertmanager resources: - - {{site.data.alerts.callout_danger}} - This does not include deleting the persistent volumes that were attached to the pods. If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes). - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pods,statefulsets,services,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,serviceMonitor -l app=cockroachdb - ~~~ - - ~~~ - pod "cockroachdb-0" deleted - pod "cockroachdb-1" deleted - pod "cockroachdb-2" deleted - statefulset.apps "alertmanager-cockroachdb" deleted - statefulset.apps "prometheus-cockroachdb" deleted - service "alertmanager-cockroachdb" deleted - service "cockroachdb" deleted - service "cockroachdb-public" deleted - poddisruptionbudget.policy "cockroachdb-budget" deleted - job.batch "cluster-init-secure" deleted - rolebinding.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrolebinding.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted - role.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrole.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrole.rbac.authorization.k8s.io "prometheus" deleted - serviceaccount "cockroachdb" deleted - serviceaccount "prometheus" deleted - alertmanager.monitoring.coreos.com "cockroachdb" deleted - prometheus.monitoring.coreos.com "cockroachdb" deleted - prometheusrule.monitoring.coreos.com "prometheus-cockroachdb-rules" deleted - servicemonitor.monitoring.coreos.com "cockroachdb" deleted - ~~~ - -1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-client-secure - ~~~ - - ~~~ - pod "cockroachdb-client-secure" deleted - ~~~ - -{{site.data.alerts.callout_info}} -This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). -{{site.data.alerts.end}} -
- -
-1. Uninstall the release: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm uninstall my-release - ~~~ - - ~~~ - release "my-release" deleted - ~~~ - -1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-client-secure - ~~~ - - ~~~ - pod "cockroachdb-client-secure" deleted - ~~~ - -1. Get the names of any CSRs for the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-3 12m system:serviceaccount:default:default Approved,Issued - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ... - ~~~ - -1. Delete any CSRs that you created: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete csr default.client.root default.node.my-release-cockroachdb-0 default.node.my-release-cockroachdb-1 default.node.my-release-cockroachdb-2 default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest "default.client.root" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-0" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-1" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-2" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-3" deleted - ~~~ - - {{site.data.alerts.callout_info}} - This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). - {{site.data.alerts.end}} -
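-
-As a final check, `helm list` should no longer show `my-release`, and `kubectl get pods` should show no remaining CockroachDB pods; a sketch:
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ helm list
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ shell
-$ kubectl get pods
-~~~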
diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-upgrade-cluster-helm.md b/src/current/_includes/v21.2/orchestration/kubernetes-upgrade-cluster-helm.md deleted file mode 100644 index dc94bdfa191..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-upgrade-cluster-helm.md +++ /dev/null @@ -1,253 +0,0 @@ -1. Verify that you can upgrade. - - To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta). - - Therefore, in order to upgrade to v21.2, you must be on a production release of v21.1. - - 1. If you are upgrading to v21.2 from a production release earlier than v21.1, or from a testing release (alpha/beta), first [upgrade to a production release of v21.1](../v21.1/operate-cockroachdb-kubernetes.html?filters=helm#upgrade-the-cluster). Be sure to complete all the steps. - - 1. Then return to this page and perform a second upgrade to v21.2. - - 1. If you are upgrading from any production release of v21.1, or from any earlier v21.2 patch release, you do not have to go through intermediate releases; continue to step 2. - -1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**: - - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](scale-cockroachdb-kubernetes.html?filters=helm#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually). - - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade. - - In the **Node List**: - - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over. - - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](scale-cockroachdb-kubernetes.html?filters=helm#add-nodes) to your cluster before beginning your upgrade. - -1. Review the [backward-incompatible changes in v21.2](../releases/v21.2.html#v21-2-0-backward-incompatible-changes) and [deprecated features](../releases/v21.2.html#v21-2-0-deprecations). If any affect your deployment, make the necessary changes before starting the rolling upgrade to v21.2. - -1. Decide how the upgrade will be finalized. - - By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. 
This will enable certain [features and performance improvements introduced in v21.2](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to v21.1. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step. - - {{site.data.alerts.callout_info}} - Finalization only applies when performing a major version upgrade (for example, from v21.1.x to v21.2). Patch version upgrades (for example, within the v21.2.x series) can always be downgraded. - {{site.data.alerts.end}} - - {% if page.secure == true %} - - 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - {% endif %} - - 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '21.1'; - ~~~ - - 1. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., `cockroachdb-0`, `cockroachdb-1`, `cockroachdb-2`) the partition value should be 2: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.updateStrategy.rollingUpdate.partition=2 - ~~~ - -1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet: - - {{site.data.alerts.callout_info}} - For Helm, you must remove the cluster initialization job from when the cluster was created before the cluster version can be changed. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete job my-release-cockroachdb-init - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set image.tag={{page.release_info.version}} \ - --reuse-values - ~~~ - -1. Check the status of your cluster's pods. 
You should see one of them being restarted: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 2m - my-release-cockroachdb-1 1/1 Running 0 3m - my-release-cockroachdb-2 0/1 ContainerCreating 0 25s - my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s - ... - ~~~ - - {{site.data.alerts.callout_info}} - Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster. - {{site.data.alerts.end}} - -1. After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% if page.secure == true %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - {% endif %} - -1. Run the following SQL query to verify that the number of underreplicated ranges is zero: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status; - ~~~ - - ~~~ - ranges_underreplicated - -------------------------- - 0 - (1 row) - ~~~ - - This indicates that it is safe to proceed to the next pod. - -1. Exit the SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Decrement the partition value by 1 to allow the next pod in the cluster to update: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.updateStrategy.rollingUpdate.partition=1 \ - ~~~ - -1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`). - -1. Check the image of each pod to confirm that all have been upgraded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods \ - -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' - ~~~ - - ~~~ - my-release-cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}} - my-release-cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}} - my-release-cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}} - ... - ~~~ - - You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details). - - -1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day). - - If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary. - - {{site.data.alerts.callout_info}} - This is only possible when performing a major version upgrade (for example, from v21.1.x to v21.2). Patch version upgrades (for example, within the v21.2.x series) are auto-finalized. - {{site.data.alerts.end}} - - To finalize the upgrade, re-enable auto-finalization: - - {% if page.secure == true %} - - 1. 
Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - {% endif %} - - 1. Re-enable auto-finalization: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > RESET CLUSTER SETTING cluster.preserve_downgrade_option; - ~~~ - - 1. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v21.2/orchestration/kubernetes-upgrade-cluster-manual.md b/src/current/_includes/v21.2/orchestration/kubernetes-upgrade-cluster-manual.md deleted file mode 100644 index 212443702bb..00000000000 --- a/src/current/_includes/v21.2/orchestration/kubernetes-upgrade-cluster-manual.md +++ /dev/null @@ -1,242 +0,0 @@ -1. Verify that you can upgrade. - - To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta). - - Therefore, in order to upgrade to v21.2, you must be on a production release of v21.1. - - 1. If you are upgrading to v21.2 from a production release earlier than v21.1, or from a testing release (alpha/beta), first [upgrade to a production release of v21.1](../v21.1/operate-cockroachdb-kubernetes.html?filters=manual#upgrade-the-cluster). Be sure to complete all the steps. - - 1. Then return to this page and perform a second upgrade to v21.2. - - 1. If you are upgrading from any production release of v21.1, or from any earlier v21.2 patch release, you do not have to go through intermediate releases; continue to step 2. - -1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**: - - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](scale-cockroachdb-kubernetes.html?filters=manual#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually). - - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade. - - In the **Node List**: - - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over. 
- - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](scale-cockroachdb-kubernetes.html?filters=manual#add-nodes) to your cluster before beginning your upgrade.
-
-1. Review the [backward-incompatible changes in v21.2](../releases/v21.2.html#v21-2-0-backward-incompatible-changes) and [deprecated features](../releases/v21.2.html#v21-2-0-deprecations). If any affect your deployment, make the necessary changes before starting the rolling upgrade to v21.2.
-
-1. Decide how the upgrade will be finalized.
-
- By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in v21.2](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to v21.1. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step.
-
- {{site.data.alerts.callout_info}}
- Finalization only applies when performing a major version upgrade (for example, from v21.1.x to v21.2). Patch version upgrades (for example, within the v21.2.x series) can always be downgraded.
- {{site.data.alerts.end}}
-
- {% if page.secure == true %}
-
- 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html). For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=manual#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- {% endif %}
-
- 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING cluster.preserve_downgrade_option = '21.1';
- ~~~
-
- 1. Exit the SQL shell and delete the temporary pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated.
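-
- If you want to check where a rollout currently stands, the partition value can be read back from the StatefulSet directly. A minimal sketch, assuming the StatefulSet is named `cockroachdb` as in this configuration:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- # Prints the current rolling-update partition value; only pods whose
- # ordinal is greater than or equal to this value get the new image.
- $ kubectl get statefulset cockroachdb \
- -o jsonpath='{.spec.updateStrategy.rollingUpdate.partition}'
- ~~~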
- For a cluster with 3 pods (e.g., `cockroachdb-0`, `cockroachdb-1`, `cockroachdb-2`), the partition value should be 2:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- --type='json' \
- -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:{{page.release_info.version}}"}]'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-1. Check the status of your cluster's pods. You should see one of them being restarted:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 2m
- cockroachdb-1 1/1 Running 0 2m
- cockroachdb-2 0/1 Terminating 0 1m
- ...
- ~~~
-
-1. After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% if page.secure == true %}
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- {% else %}
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- {% endif %}
-
-1. Run the following SQL query to verify that the number of under-replicated ranges is zero:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status;
- ~~~
-
- ~~~
- ranges_underreplicated
- --------------------------
- 0
- (1 row)
- ~~~
-
- This indicates that it is safe to proceed to the next pod.
-
-1. Exit the SQL shell:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-1. Decrement the partition value by 1 to allow the next pod in the cluster to update:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb \
- -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb patched
- ~~~
-
-1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`).
-
-1. Check the image of each pod to confirm that all have been upgraded:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods \
- -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
- ~~~
-
- ~~~
- cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- ...
- ~~~
-
- You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details).
-
-1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day).
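-
- While you monitor, one way to confirm that the upgrade has not yet been finalized is to check the active cluster version, which remains at the previous version until finalization. A sketch, assuming the `cockroachdb-client-secure` pod from the earlier steps (an insecure deployment can run the same statement through the temporary `kubectl run` client):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- # Reports 21.1 until the upgrade is finalized, 21.2 afterward.
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public \
- --execute="SHOW CLUSTER SETTING version;"
- ~~~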
-
- If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
-
- {{site.data.alerts.callout_info}}
- This is only possible when performing a major version upgrade (for example, from v21.1.x to v21.2). Patch version upgrades (for example, within the v21.2.x series) are auto-finalized.
- {{site.data.alerts.end}}
-
- To finalize the upgrade, re-enable auto-finalization:
-
- {% if page.secure == true %}
-
- 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it \
- --image=cockroachdb/cockroach \
- --rm \
- --restart=Never \
- -- sql \
- --insecure \
- --host=cockroachdb-public
- ~~~
-
- {% endif %}
-
- 1. Re-enable auto-finalization:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
- ~~~
-
- 1. Exit the SQL shell and delete the temporary pod:
-
- {% include_cached copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v21.2/orchestration/local-start-kubernetes.md b/src/current/_includes/v21.2/orchestration/local-start-kubernetes.md
deleted file mode 100644
index e504d052dbe..00000000000
--- a/src/current/_includes/v21.2/orchestration/local-start-kubernetes.md
+++ /dev/null
@@ -1,24 +0,0 @@
-## Before you begin
-
-Before getting started, it's helpful to review some Kubernetes-specific terminology:
-
-Feature | Description
---------|------------
-[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
-[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
-[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
-[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
-
-## Step 1. Start Kubernetes
-
-1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation.
-
- {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailable field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}}
-
-2. Start a local Kubernetes cluster:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ minikube start
- ~~~
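-
- As a quick sanity check, you can confirm that the local cluster is up and that `kubectl` is pointed at it before moving on. For example:
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- # A single node named "minikube" in the Ready state indicates that
- # the local Kubernetes cluster started successfully.
- $ kubectl get nodes
- ~~~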
diff --git a/src/current/_includes/v21.2/orchestration/monitor-cluster.md b/src/current/_includes/v21.2/orchestration/monitor-cluster.md
deleted file mode 100644
index 5cadf9609a3..00000000000
--- a/src/current/_includes/v21.2/orchestration/monitor-cluster.md
+++ /dev/null
@@ -1,95 +0,0 @@
-To access the cluster's [DB Console](ui-overview.html):
-
-{% if page.secure == true %}
-
-1. On secure clusters, [certain pages of the DB Console](ui-overview.html#db-console-access) can only be accessed by `admin` users.
-
- Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
-
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ -
- -
- {% include_cached copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=my-release-cockroachdb-public
- ~~~
- -1. Assign `roach` to the `admin` role (you only need to do this once): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT admin TO roach; - ~~~ - -1. Exit the SQL shell and pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -{% endif %} - -1. In a new terminal window, port-forward from your local machine to the `cockroachdb-public` service: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/cockroachdb-public 8080 - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/cockroachdb-public 8080 - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/my-release-cockroachdb-public 8080 - ~~~ -
- - ~~~ - Forwarding from 127.0.0.1:8080 -> 8080 - ~~~ - - {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the DB Console. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}} - -{% if page.secure == true %} - -1. Go to https://localhost:8080 and log in with the username and password you created earlier. - - {% include {{ page.version.version }}/misc/chrome-localhost.md %} - -{% else %} - -1. Go to http://localhost:8080. - -{% endif %} - -1. In the UI, verify that the cluster is running as expected: - - View the [Node List](ui-cluster-overview-page.html#node-list) to ensure that all nodes successfully joined the cluster. - - Click the **Databases** tab on the left to verify that `bank` is listed. diff --git a/src/current/_includes/v21.2/orchestration/operator-check-namespace.md b/src/current/_includes/v21.2/orchestration/operator-check-namespace.md deleted file mode 100644 index d6c70aa03dc..00000000000 --- a/src/current/_includes/v21.2/orchestration/operator-check-namespace.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -All `kubectl` steps should be performed in the [namespace where you installed the Operator](deploy-cockroachdb-with-kubernetes.html#install-the-operator). By default, this is `cockroach-operator-system`. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/start-cockroachdb-helm-insecure.md b/src/current/_includes/v21.2/orchestration/start-cockroachdb-helm-insecure.md deleted file mode 100644 index 21061db1776..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-cockroachdb-helm-insecure.md +++ /dev/null @@ -1,115 +0,0 @@ -{{site.data.alerts.callout_danger}} -The CockroachDB Helm chart is undergoing maintenance for compatibility with Kubernetes versions 1.17 through 1.21 (the latest version as of this writing). No new feature development is currently planned. For new production and local deployments, we currently recommend using a manual configuration (**Configs** option). If you are experiencing issues with a Helm deployment on production, contact our [Support team](https://support.cockroachlabs.com/). -{{site.data.alerts.end}} - -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -3. Modify our Helm chart's [`values.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml) parameters for your deployment scenario. - - Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below. 
- - {% include_cached copy-clipboard.html %} - ~~~ - statefulset: - resources: - limits: - memory: "8Gi" - requests: - memory: "8Gi" - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - ~~~ - - 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`. - - {{site.data.alerts.callout_success}} - For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`. - {{site.data.alerts.end}} - - 1. For an insecure deployment, set `tls.enabled` to `false`. For clarity, this example includes the example configuration from the previous steps. - - {% include_cached copy-clipboard.html %} - ~~~ - statefulset: - resources: - limits: - memory: "8Gi" - requests: - memory: "8Gi" - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - tls: - enabled: false - ~~~ - - 1. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type). - - {{site.data.alerts.callout_info}} - If necessary, you can [expand disk size](/docs/{{ page.version.version }}/configure-cockroachdb-kubernetes.html?filters=helm#expand-disk-size) after the cluster is live. - {{site.data.alerts.end}} - -1. Install the CockroachDB Helm chart. - - Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -1. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/orchestration/start-cockroachdb-helm-secure.md b/src/current/_includes/v21.2/orchestration/start-cockroachdb-helm-secure.md deleted file mode 100644 index 078071141fc..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-cockroachdb-helm-secure.md +++ /dev/null @@ -1,112 +0,0 @@ -The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier. - -The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs. - -A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation. - -{{site.data.alerts.callout_danger}} -If you are running a secure Helm deployment on Kubernetes 1.22 and later, you must migrate away from using the Kubernetes CA for cluster authentication. For details, see [Certificate management](secure-cockroachdb-kubernetes.html?filters=helm#migration-to-self-signer). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -Secure CockroachDB deployments on Amazon EKS via Helm are [not yet supported](https://github.com/cockroachdb/cockroach/issues/38847). -{{site.data.alerts.end}} - -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -1. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -1. The cluster configuration is set in the Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml). - - {{site.data.alerts.callout_info}} - By default, the Helm chart specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html?filters=helm). 
- {{site.data.alerts.end}} - - Before deploying, modify some parameters in our Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml): - - 1. Create a local YAML file (e.g., `my-values.yaml`) to specify your custom values. These will be used to override the defaults in `values.yaml`. - - 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`. - - {{site.data.alerts.callout_success}} - For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ yaml - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - ~~~ - - The Helm chart defaults to a secure deployment by automatically setting `tls.enabled` to `true`. - - {{site.data.alerts.callout_info}} - By default, the Helm chart will generate and sign 1 client and 1 node certificate to secure the cluster. To authenticate using your own CA, see [Certificate management](/docs/{{ page.version.version }}/secure-cockroachdb-kubernetes.html?filters=helm#use-a-custom-ca). - {{site.data.alerts.end}} - -1. Install the CockroachDB Helm chart, specifying your custom values file. - - Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. - {{site.data.alerts.end}} - - {{site.data.alerts.callout_danger}} - To allow the CockroachDB pods to deploy successfully, do not set the [`--wait` flag](https://helm.sh/docs/intro/using_helm/#helpful-options-for-installupgraderollback) when using Helm commands. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release --values {custom-values}.yaml cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -1. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/orchestration/start-cockroachdb-insecure.md b/src/current/_includes/v21.2/orchestration/start-cockroachdb-insecure.md deleted file mode 100644 index c0692798b67..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-cockroachdb-insecure.md +++ /dev/null @@ -1,114 +0,0 @@ -1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it. - - Download [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - {{site.data.alerts.callout_info}} - By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Resource management](configure-cockroachdb-kubernetes.html?filters=manual). - {{site.data.alerts.end}} - - Use the file to create the StatefulSet and start the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - - Alternatively, if you'd rather start with a configuration file that has been customized for performance: - - 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml - ~~~ - - 2. Modify the file wherever there is a `TODO` comment. - - 3. Use the file to create the StatefulSet and start the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-insecure.yaml - ~~~ - -2. Confirm that three pods are `Running` successfully. 
Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job.batch/cluster-init created - ~~~ - -5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init 1/1 7s 27s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-cqf8l 0/1 Completed 0 56s - cockroachdb-0 1/1 Running 0 7m51s - cockroachdb-1 1/1 Running 0 7m51s - cockroachdb-2 1/1 Running 0 7m51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/orchestration/start-cockroachdb-local-helm-insecure.md b/src/current/_includes/v21.2/orchestration/start-cockroachdb-local-helm-insecure.md deleted file mode 100644 index 494b3e6207e..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-cockroachdb-local-helm-insecure.md +++ /dev/null @@ -1,65 +0,0 @@ -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -3. Install the CockroachDB Helm chart. - - Provide a "release" name to identify and track this particular deployment of the chart. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. 
- {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -4. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/orchestration/start-cockroachdb-local-insecure.md b/src/current/_includes/v21.2/orchestration/start-cockroachdb-local-insecure.md deleted file mode 100644 index 37fe8e46939..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-cockroachdb-local-insecure.md +++ /dev/null @@ -1,83 +0,0 @@ -1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - -2. Confirm that three pods are `Running` successfully. Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job.batch/cluster-init created - ~~~ - -5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init 1/1 7s 27s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-cqf8l 0/1 Completed 0 56s - cockroachdb-0 1/1 Running 0 7m51s - cockroachdb-1 1/1 Running 0 7m51s - cockroachdb-2 1/1 Running 0 7m51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/orchestration/start-cockroachdb-operator-secure.md b/src/current/_includes/v21.2/orchestration/start-cockroachdb-operator-secure.md deleted file mode 100644 index 1dc8ef93326..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-cockroachdb-operator-secure.md +++ /dev/null @@ -1,121 +0,0 @@ -### Install the Operator - -{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -1. Apply the [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) for the Operator: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/crds.yaml - ~~~ - - ~~~ - customresourcedefinition.apiextensions.k8s.io/crdbclusters.crdb.cockroachlabs.com created - ~~~ - -1. By default, the Operator is configured to install in the `cockroach-operator-system` namespace and to manage CockroachDB instances for all namespaces on the cluster. - - If you'd like to change either of these defaults: - - 1. Download the Operator manifest: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - 1. To use a custom namespace, edit all instances of `namespace: cockroach-operator-system` with your desired namespace. - - 1. 
To limit the namespaces that will be monitored, set the `WATCH_NAMESPACE` environment variable in the `Deployment` pod spec. This can be set to a single namespace, or a comma-delimited set of namespaces. When set, only those `CrdbCluster` resources in the supplied namespace(s) will be reconciled. - - 1. Instead of using the command below, apply your local version of the Operator manifest to the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f operator.yaml - ~~~ - - If you want to use the default namespace settings: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - ~~~ - clusterrole.rbac.authorization.k8s.io/cockroach-database-role created - serviceaccount/cockroach-database-sa created - clusterrolebinding.rbac.authorization.k8s.io/cockroach-database-rolebinding created - role.rbac.authorization.k8s.io/cockroach-operator-role created - clusterrolebinding.rbac.authorization.k8s.io/cockroach-operator-rolebinding created - clusterrole.rbac.authorization.k8s.io/cockroach-operator-role created - serviceaccount/cockroach-operator-sa created - rolebinding.rbac.authorization.k8s.io/cockroach-operator-default created - deployment.apps/cockroach-operator created - ~~~ - -1. Set your current namespace to the one used by the Operator. For example, to use the Operator's default namespace: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl config set-context --current --namespace=cockroach-operator-system - ~~~ - -1. Validate that the Operator is running: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroach-operator-6f7b86ffc4-9ppkv 1/1 Running 0 54s - ~~~ - -### Initialize the cluster - -1. Download `example.yaml`, a custom resource that tells the Operator how to configure the Kubernetes cluster. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/example.yaml - ~~~ - - {{site.data.alerts.callout_info}} - By default, this custom resource specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html). - {{site.data.alerts.end}} - - {{site.data.alerts.callout_info}} - By default, the Operator will generate and sign 1 client and 1 node certificate to secure the cluster. This means that if you do not provide a CA, a `cockroach`-generated CA is used. If you want to authenticate using your own CA, [specify the generated secrets in the custom resource](secure-cockroachdb-kubernetes.html#use-a-custom-ca) **before** proceeding to the next step. - {{site.data.alerts.end}} - -1. Apply `example.yaml`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f example.yaml - ~~~ - - The Operator will create a StatefulSet and initialize the nodes as a cluster. - - ~~~ - crdbcluster.crdb.cockroachlabs.com/cockroachdb created - ~~~ - -1. 
Check that the pods were created: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroach-operator-6f7b86ffc4-9t9zb 1/1 Running 0 3m22s - cockroachdb-0 1/1 Running 0 2m31s - cockroachdb-1 1/1 Running 0 102s - cockroachdb-2 1/1 Running 0 46s - ~~~ - - Each pod should have `READY` status soon after being created. \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/start-cockroachdb-secure.md b/src/current/_includes/v21.2/orchestration/start-cockroachdb-secure.md deleted file mode 100644 index 972cabc2d8e..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-cockroachdb-secure.md +++ /dev/null @@ -1,108 +0,0 @@ -### Configure the cluster - -1. Download and modify our [StatefulSet configuration](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml - ~~~ - -1. Update `secretName` with the name of the corresponding node secret. - - The secret names depend on your method for generating secrets. For example, if you follow the below [steps using `cockroach cert`](#create-certificates), use this secret name: - - {% include_cached copy-clipboard.html %} - ~~~ yaml - secret: - secretName: cockroachdb.node - ~~~ - -1. The StatefulSet configuration deploys CockroachDB into the `default` namespace. To use a different namespace, search for `kind: RoleBinding` and change its `subjects.namespace` property to the name of the namespace. Otherwise, a `failed to read secrets` error occurs when you attempt to follow the steps in [Initialize the cluster](#initialize-the-cluster). - -{{site.data.alerts.callout_info}} -By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Configure the Cluster](configure-cockroachdb-kubernetes.html?filters=manual). -{{site.data.alerts.end}} - -### Create certificates - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-cockroach-cert.md %} - -### Initialize the cluster - -1. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset.yaml - ~~~ - - ~~~ - serviceaccount/cockroachdb created - role.rbac.authorization.k8s.io/cockroachdb created - rolebinding.rbac.authorization.k8s.io/cockroachdb created - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - -1. Initialize the CockroachDB cluster: - - 1. Confirm that three pods are `Running` successfully. 
Note that they will not be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - - 1. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m - pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m - pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m - ~~~ - - 1. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-0 \ - -- /cockroach/cockroach init \ - --certs-dir=/cockroach/cockroach-certs - ~~~ - - ~~~ - Cluster successfully initialized - ~~~ - - 1. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 3m - cockroachdb-1 1/1 Running 0 3m - cockroachdb-2 1/1 Running 0 3m - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/start-kubernetes.md b/src/current/_includes/v21.2/orchestration/start-kubernetes.md deleted file mode 100644 index 5168d470465..00000000000 --- a/src/current/_includes/v21.2/orchestration/start-kubernetes.md +++ /dev/null @@ -1,98 +0,0 @@ -You can use the hosted [Google Kubernetes Engine (GKE)](#hosted-gke) service or the hosted [Amazon Elastic Kubernetes Service (EKS)](#hosted-eks) to quickly start Kubernetes. - -{{site.data.alerts.callout_info}} -GKE or EKS are not required to run CockroachDB on Kubernetes. A manual GCE or AWS cluster with the [minimum recommended Kubernetes version](#kubernetes-version) and at least 3 pods, each presenting [sufficient resources](#resources) to start a CockroachDB node, can also be used. -{{site.data.alerts.end}} - -### Hosted GKE - -1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation. - - This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_success}} - The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the DB Console using the steps in this guide. - {{site.data.alerts.end}} - -2. 
From your local workstation, start the Kubernetes cluster, specifying one of the available [regions](https://cloud.google.com/compute/docs/regions-zones#available) (e.g., `us-east1`): - - {{site.data.alerts.callout_success}} - Since this region can differ from your default `gcloud` region, be sure to include the `--region` flag to run `gcloud` commands against this cluster. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud container clusters create cockroachdb --machine-type n2-standard-4 --region {region-name} --num-nodes 1 - ~~~ - - ~~~ - Creating cluster cockroachdb...done. - ~~~ - - This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--region` flag specifies a [regional three-zone cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster), and `--num-nodes` specifies one Kubernetes worker node in each zone. - - The `--machine-type` flag tells the node pool to use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations). - - The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster. - -3. Get the email address associated with your Google Cloud account: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud info | grep Account - ~~~ - - ~~~ - Account: [your.google.cloud.email@example.org] - ~~~ - - {{site.data.alerts.callout_danger}} - This command returns your email address in all lowercase. However, in the next step, you must enter the address using the accurate capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com. - {{site.data.alerts.end}} - -4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create clusterrolebinding $USER-cluster-admin-binding \ - --clusterrole=cluster-admin \ - --user={your.google.cloud.email@example.org} - ~~~ - - ~~~ - clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created - ~~~ - -### Hosted EKS - -1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation. - - This includes installing and configuring the AWS CLI and `eksctl`, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_info}} - If you are running [EKS-Anywhere](https://aws.amazon.com/eks/eks-anywhere/), CockroachDB requires that you [configure your default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/) to auto-provision persistent volumes. Alternatively, you can define a custom storage configuration as required by your install pattern. - {{site.data.alerts.end}} - -2. 
From your local workstation, start the Kubernetes cluster: - - {{site.data.alerts.callout_success}} - To ensure that each of the 3 nodes can be placed in a different availability zone, you may want to first [confirm that at least 3 zones are available in the region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#availability-zones-describe) for your account. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ eksctl create cluster \ - --name cockroachdb \ - --nodegroup-name standard-workers \ - --node-type m5.xlarge \ - --nodes 3 \ - --nodes-min 1 \ - --nodes-max 4 \ - --node-ami auto - ~~~ - - This creates EKS instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations). - - Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster. - -3. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/home) to verify that the stacks `eksctl-cockroachdb-cluster` and `eksctl-cockroachdb-nodegroup-standard-workers` were successfully created. Be sure that your region is selected in the console. \ No newline at end of file diff --git a/src/current/_includes/v21.2/orchestration/test-cluster-insecure.md b/src/current/_includes/v21.2/orchestration/test-cluster-insecure.md deleted file mode 100644 index dd4f47561ae..00000000000 --- a/src/current/_includes/v21.2/orchestration/test-cluster-insecure.md +++ /dev/null @@ -1,72 +0,0 @@ -1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ -
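The `--url` form of the connection is equivalent to the discrete flags above; a minimal sketch, assuming the `cockroachdb-public` service name used in this example (`sslmode=disable` matches `--insecure`):

{% include_cached copy-clipboard.html %}
~~~ shell
# Same insecure connection expressed as a single connection URL (sketch)
$ kubectl run cockroachdb -it \
--image=cockroachdb/cockroach:{{page.release_info.version}} \
--rm \
--restart=Never \
-- sql \
--url "postgresql://root@cockroachdb-public:26257/defaultdb?sslmode=disable"
~~~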
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
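Because `--rm` removes the temporary pod when the SQL shell exits, no cleanup is normally required. If a session is interrupted and the pod lingers, a cleanup sketch (the pod name `cockroachdb` comes from the `kubectl run` commands above):

{% include_cached copy-clipboard.html %}
~~~ shell
# Remove a leftover temporary client pod; --ignore-not-found makes this a no-op otherwise
$ kubectl delete pod cockroachdb --ignore-not-found
~~~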
- -2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - balance DECIMAL - ); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts (balance) - VALUES - (1000.50), (20000), (380), (500), (55000); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +--------------------------------------+---------+ - 6f123370-c48c-41ff-b384-2c185590af2b | 380 - 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50 - ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500 - d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000 - e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000 - (5 rows) - ~~~ - -3. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v21.2/orchestration/test-cluster-secure.md b/src/current/_includes/v21.2/orchestration/test-cluster-secure.md deleted file mode 100644 index 8e72dd5b893..00000000000 --- a/src/current/_includes/v21.2/orchestration/test-cluster-secure.md +++ /dev/null @@ -1,144 +0,0 @@ -To use the CockroachDB SQL client, first launch a secure pod running the `cockroach` binary. - -
- -{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl create \ --f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/client-secure-operator.yaml -~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the CockroachDB SQL shell. - # All statements must be terminated by a semicolon. - # To exit, type: \q. - # - # Server version: CockroachDB CCL v21.1.0 (x86_64-unknown-linux-gnu, built 2021/04/23 13:54:57, go1.13.14) (same version as client) - # Cluster ID: a96791d9-998c-4683-a3d3-edbf425bbf11 - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/defaultdb> - ~~~ - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
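Before running `kubectl exec`, you can confirm that the client pod has started; a quick check, assuming the `cockroachdb-client-secure` pod name from the manifest above:

{% include_cached copy-clipboard.html %}
~~~ shell
# The pod should report STATUS Running before you exec into it
$ kubectl get pod cockroachdb-client-secure
~~~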
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl create \ --f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/client.yaml -~~~ - -~~~ -pod/cockroachdb-client-secure created -~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - - # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/defaultdb> - ~~~ - - {{site.data.alerts.callout_success}} - This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat the `kubectl exec` command above using the appropriate `cockroach` command. - - If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
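As the callout notes, the same pod can run other `cockroach` client commands. A sketch using `cockroach node status`, assuming the certificate directory and host from the example above:

{% include_cached copy-clipboard.html %}
~~~ shell
# Reuse the secure client pod for another client command
$ kubectl exec -it cockroachdb-client-secure \
-- ./cockroach node status \
--certs-dir=/cockroach-certs \
--host=cockroachdb-public
~~~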
- -
-From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/examples/client-secure.yaml) file to launch a pod and keep it running indefinitely. - -1. Download the file: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O \ - https://raw.githubusercontent.com/cockroachdb/helm-charts/master/examples/client-secure.yaml - ~~~ - -1. In the file, set the following values: - - `spec.serviceAccountName: my-release-cockroachdb` - - `spec.image: cockroachdb/cockroach:{your CockroachDB version}` - - `spec.volumes[0].projected.sources[0].secret.name: my-release-cockroachdb-client-secret` - -1. Use the file to launch a pod and keep it running indefinitely: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f client-secure.yaml - ~~~ - - ~~~ - pod "cockroachdb-client-secure" created - ~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=./cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - - # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 - # - # Enter \? for a brief introduction. - # - root@my-release-cockroachdb-public:26257/defaultdb> - ~~~ - - {{site.data.alerts.callout_success}} - This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat the `kubectl exec` command above using the appropriate `cockroach` command. - - If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
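If you opt to delete the pod and recreate it when needed, as the callout above suggests, the cycle looks like this sketch (file and pod names from the example above):

{% include_cached copy-clipboard.html %}
~~~ shell
# Delete the client pod when it is not needed...
$ kubectl delete pod cockroachdb-client-secure
~~~

{% include_cached copy-clipboard.html %}
~~~ shell
# ...then recreate it later from the same manifest
$ kubectl create -f client-secure.yaml
~~~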
\ No newline at end of file diff --git a/src/current/_includes/v21.2/performance/check-rebalancing-after-partitioning.md b/src/current/_includes/v21.2/performance/check-rebalancing-after-partitioning.md deleted file mode 100644 index 701da4d2e21..00000000000 --- a/src/current/_includes/v21.2/performance/check-rebalancing-after-partitioning.md +++ /dev/null @@ -1,41 +0,0 @@ -Over the next few minutes, CockroachDB will rebalance all partitions based on the constraints you defined. - -To check this at a high level, access the DB Console on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning: - -Perf tuning rebalancing - -To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql \ -{{page.certs}} \ ---host=
\ ---database=movr \ ---execute="SELECT * FROM \ -[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \ -WHERE \"start_key\" IS NOT NULL \ - AND \"start_key\" NOT LIKE '%Prefix%';" -~~~ - -~~~ - start_key | end_key | range_id | replicas | lease_holder -+------------------+----------------------------+----------+----------+--------------+ - /"boston" | /"boston"/PrefixEnd | 105 | {1,2,3} | 3 - /"los angeles" | /"los angeles"/PrefixEnd | 121 | {7,8,9} | 8 - /"new york" | /"new york"/PrefixEnd | 101 | {1,2,3} | 3 - /"san francisco" | /"san francisco"/PrefixEnd | 117 | {7,8,9} | 8 - /"seattle" | /"seattle"/PrefixEnd | 113 | {4,5,6} | 5 - /"washington dc" | /"washington dc"/PrefixEnd | 109 | {1,2,3} | 1 -(6 rows) -~~~ - -For reference, here's how the nodes map to zones: - -Node IDs | Zone ----------|----- -1-3 | `us-east1-b` (South Carolina) -4-6 | `us-west1-a` (Oregon) -7-9 | `us-west2-a` (Los Angeles) - -We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`. diff --git a/src/current/_includes/v21.2/performance/check-rebalancing.md b/src/current/_includes/v21.2/performance/check-rebalancing.md deleted file mode 100644 index 8694c920127..00000000000 --- a/src/current/_includes/v21.2/performance/check-rebalancing.md +++ /dev/null @@ -1,33 +0,0 @@ -Since you started each node with the `--locality` flag set to its GCE zone, over the next few minutes, CockroachDB will rebalance data evenly across the zones. - -To check this, access the DB Console on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes: - -Perf tuning rebalancing - -For reference, here's how the nodes map to zones: - -Node IDs | Zone ----------|----- -1-3 | `us-east1-b` (South Carolina) -4-6 | `us-west1-a` (Oregon) -7-9 | `us-west2-a` (Los Angeles) - -To verify even balancing at range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql \ -{{page.certs}} \ ---host=
\ ---database=movr \ ---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;" -~~~ - -~~~ - start_key | end_key | range_id | replicas | lease_holder -+-----------+---------+----------+----------+--------------+ - NULL | NULL | 33 | {3,4,7} | 7 -(1 row) -~~~ - -In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west2-a` zone. diff --git a/src/current/_includes/v21.2/performance/configure-network.md b/src/current/_includes/v21.2/performance/configure-network.md deleted file mode 100644 index e9abeb94df3..00000000000 --- a/src/current/_includes/v21.2/performance/configure-network.md +++ /dev/null @@ -1,18 +0,0 @@ -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster) -- **8080** (`tcp:8080`) for accessing the DB Console - -Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, to access the DB Console from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls): - -Field | Recommended Value -------|------------------ -Name | **cockroachweb** -Source filter | IP ranges -Source IP ranges | Your local network's IP ranges -Allowed protocols | **tcp:8080** -Target tags | `cockroachdb` - -{{site.data.alerts.callout_info}} -The **tag** feature will let you easily apply the rule to your instances. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/performance/import-movr.md b/src/current/_includes/v21.2/performance/import-movr.md deleted file mode 100644 index c61a32f64ce..00000000000 --- a/src/current/_includes/v21.2/performance/import-movr.md +++ /dev/null @@ -1,160 +0,0 @@ -Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle). - -1. Still on the fourth instance, start the [built-in SQL shell](cockroach-sql.html), pointing it at one of the CockroachDB nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql {{page.certs}} --host=
- ~~~ - -2. Create the `movr` database and set it as the default: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE movr; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET DATABASE = movr; - ~~~ - -3. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles`, and `rides` tables: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE users ( - id UUID NOT NULL, - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv' - ); - ~~~ - - ~~~ - job_id | status | fraction_completed | rows | index_entries | system_records | bytes - +--------------------+-----------+--------------------+------+---------------+----------------+--------+ - 390345990764396545 | succeeded | 1 | 1998 | 0 | 0 | 241052 - (1 row) - - Time: 2.882582355s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE vehicles ( - id UUID NOT NULL, - city STRING NOT NULL, - type STRING NULL, - owner_id UUID NULL, - creation_time TIMESTAMP NULL, - status STRING NULL, - ext JSON NULL, - mycol STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv' - ); - ~~~ - - ~~~ - job_id | status | fraction_completed | rows | index_entries | system_records | bytes - +--------------------+-----------+--------------------+-------+---------------+----------------+---------+ - 390346109887250433 | succeeded | 1 | 19998 | 19998 | 0 | 3558767 - (1 row) - - Time: 5.803841493s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE rides ( - id UUID NOT NULL, - city STRING NOT NULL, - vehicle_city STRING NULL, - rider_id UUID NULL, - vehicle_id UUID NULL, - start_address STRING NULL, - end_address STRING NULL, - start_time TIMESTAMP NULL, - end_time TIMESTAMP NULL, - revenue DECIMAL(10,2) NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC), - INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC), - CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv' - ); - ~~~ - - ~~~ - job_id | status | fraction_completed | rows | index_entries | system_records | bytes - 
+--------------------+-----------+--------------------+--------+---------------+----------------+-----------+ - 390346325693792257 | succeeded | 1 | 999996 | 1999992 | 0 | 339741841 - (1 row) - - Time: 44.620371424s - ~~~ - - {{site.data.alerts.callout_success}} - You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](ui-jobs-page.html) of the DB Console. - {{site.data.alerts.end}} - -4. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables: - - Referencing columns | Referenced columns - --------------------|------------------- - `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id` - `rides.city`, `rides.rider_id` | `users.city`, `users.id` - `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id` - - As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE vehicles - ADD CONSTRAINT fk_city_ref_users - FOREIGN KEY (city, owner_id) - REFERENCES users (city, id); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE rides - ADD CONSTRAINT fk_city_ref_users - FOREIGN KEY (city, rider_id) - REFERENCES users (city, id); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > ALTER TABLE rides - ADD CONSTRAINT fk_vehicle_city_ref_vehicles - FOREIGN KEY (vehicle_city, vehicle_id) - REFERENCES vehicles (city, id); - ~~~ - -5. Exit the built-in SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v21.2/performance/lease-preference-system-database.md b/src/current/_includes/v21.2/performance/lease-preference-system-database.md deleted file mode 100644 index 4bfbb8b4931..00000000000 --- a/src/current/_includes/v21.2/performance/lease-preference-system-database.md +++ /dev/null @@ -1,8 +0,0 @@ -To reduce latency while making {% if page.name == "online-schema-changes.md" %}online schema changes{% else %}[online schema changes](online-schema-changes.html){% endif %}, we recommend constraining the `lease_preferences` [zone configuration](configure-replication-zones.html) of the `system` database to a single region and running all subsequent schema changes from a node within that region. For example, if the majority of online schema changes come from machines that are geographically close to `us-east1`, run the following: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE system CONFIGURE ZONE USING constraints = '{"+region=us-east1": 1}', lease_preferences = '[[+region=us-east1]]'; -~~~ - -Run all subsequent schema changes from a node in the specified region. diff --git a/src/current/_includes/v21.2/performance/overview.md b/src/current/_includes/v21.2/performance/overview.md deleted file mode 100644 index e3d66721de7..00000000000 --- a/src/current/_includes/v21.2/performance/overview.md +++ /dev/null @@ -1,38 +0,0 @@ -### Topology - -You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload: - -Perf tuning topology - -{{site.data.alerts.callout_info}} -Within a single GCE zone, network latency between instances should be sub-millisecond. 
-{{site.data.alerts.end}} - -You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload: - -Perf tuning topology - -{{site.data.alerts.callout_info}} -Network latencies will increase with geographic distance between nodes. You can observe this in the [Network Latency page](ui-network-latency-page.html) of the DB Console. -{{site.data.alerts.end}} - -To reproduce the performance demonstrated in this tutorial: - -- For each CockroachDB node, you'll use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk. -- For running the client application workload, you'll use smaller instances, such as `n2-standard-2`. - -### Schema - -Your schema and data will be based on our open-source, fictional peer-to-peer vehicle-sharing application, [MovR](movr.html). - -Perf tuning schema - -A few notes about the schema: - -- There are just three self-explanatory tables: In essence, `users` represents the people registered for the service, `vehicles` represents the pool of vehicles for the service, and `rides` represents when and where users have participated. -- Each table has a composite primary key, with `city` being first in the key. Although not necessary initially in the single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling. -- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later. - -### Important concepts - -To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first understand [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). Review that document before getting started here. diff --git a/src/current/_includes/v21.2/performance/partition-by-city.md b/src/current/_includes/v21.2/performance/partition-by-city.md deleted file mode 100644 index 2634a204d33..00000000000 --- a/src/current/_includes/v21.2/performance/partition-by-city.md +++ /dev/null @@ -1,419 +0,0 @@ -For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region. - -1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). - -2. 
Once you've received the trial license, SSH to any node in your cluster and [apply the license](licensing-faqs.html#set-a-license): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --host=
\ - --execute="SET CLUSTER SETTING cluster.organization = '';" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --host=
\ - --execute="SET CLUSTER SETTING enterprise.license = '';" - ~~~ - -3. Define partitions for all tables and their secondary indexes. - - Start with the `users` table: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Now define partitions for the `vehicles` table and its secondary indexes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE vehicles \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Next, define partitions for the `rides` table and its secondary indexes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE rides \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \ - PARTITION BY LIST (vehicle_city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Finally, drop an unused index on `rides` rather than partition it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="DROP INDEX rides_start_time_idx;" - ~~~ - - {{site.data.alerts.callout_info}} - The `rides` table contains 1 million rows, so dropping this index will take a few minutes. - {{site.data.alerts.end}} - -4. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-partition) to require city data to be stored on specific nodes based on node locality. - - City | Locality - -----|--------- - New York | `zone=us-east1-b` - Boston | `zone=us-east1-b` - Washington DC | `zone=us-east1-b` - Seattle | `zone=us-west1-a` - San Francisco | `zone=us-west2-a` - Los Angeles | `zone=us-west2-a` - - {{site.data.alerts.callout_info}} - Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead. - {{site.data.alerts.end}} - - Start with the `users` table partitions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - Move on to the `vehicles` table and secondary index partitions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - Finish with the `rides` table and secondary index partitions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ diff --git a/src/current/_includes/v21.2/performance/scale-cluster.md b/src/current/_includes/v21.2/performance/scale-cluster.md deleted file mode 100644 index 6c368d663de..00000000000 --- a/src/current/_includes/v21.2/performance/scale-cluster.md +++ /dev/null @@ -1,61 +0,0 @@ -1. SSH to one of the `n2-standard-4` instances in the `us-west1-a` zone. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join= \ - --locality=cloud=gce,region=us-west1,zone=us-west1-a \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two `n2-standard-4` instances in the `us-west1-a` zone. - -5. SSH to one of the `n2-standard-4` instances in the `us-west2-a` zone. - -6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -7. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join= \ - --locality=cloud=gce,region=us-west2,zone=us-west2-a \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -8. Repeat steps 5 - 7 for the other two `n2-standard-4` instances in the `us-west2-a` zone. diff --git a/src/current/_includes/v21.2/performance/start-cluster.md b/src/current/_includes/v21.2/performance/start-cluster.md deleted file mode 100644 index ee1d71149a7..00000000000 --- a/src/current/_includes/v21.2/performance/start-cluster.md +++ /dev/null @@ -1,60 +0,0 @@ -#### Start the nodes - -1. SSH to the first `n2-standard-4` instance. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join=:26257,:26257,:26257 \ - --locality=cloud=gce,region=us-east1,zone=us-east1-b \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. 
Repeat steps 1 - 3 for the other two `n2-standard-4` instances. Be sure to adjust the `--advertise-host` flag each time. - -#### Initialize the cluster - -1. SSH to the fourth instance, the one not running a CockroachDB node. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -4. Run the [`cockroach init`](cockroach-init.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init {{page.certs}} --host=
- ~~~ - - Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v21.2/performance/statement-contention.md b/src/current/_includes/v21.2/performance/statement-contention.md deleted file mode 100644 index 800d1e4c40f..00000000000 --- a/src/current/_includes/v21.2/performance/statement-contention.md +++ /dev/null @@ -1,6 +0,0 @@ -Find the transactions and statements within the transactions that are experiencing contention. CockroachDB has several tools to help you track down such transactions and statements: - -* In DB Console, visit the [Transactions](ui-transactions-page.html) and [Statements](ui-statements-page.html) pages and sort transactions and statements by contention. -* Query the [`crdb_internal.cluster_contended_indexes`](crdb-internal.html#cluster_contended_indexes) and [`crdb_internal.cluster_contended_tables`](crdb-internal.html#cluster_contended_tables) tables for your database to find the indexes and tables that are experiencing contention. - -After you identify the transactions or statements that are causing contention, follow the steps in the next section [to avoid contention](performance-best-practices-overview.html#avoid-transaction-contention). diff --git a/src/current/_includes/v21.2/performance/test-performance-after-partitioning.md b/src/current/_includes/v21.2/performance/test-performance-after-partitioning.md deleted file mode 100644 index 9754f6d9cd1..00000000000 --- a/src/current/_includes/v21.2/performance/test-performance-after-partitioning.md +++ /dev/null @@ -1,93 +0,0 @@ -After partitioning, reads and writers for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city. - -To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance). - -#### Reads - -Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Query for the data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ {{page.app}} \ - --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'new york' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"] - ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"] - ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"] - ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"] - ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"] - ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"] - ... - - Times (milliseconds): - [20.065784454345703, 7.866144180297852, 8.362054824829102, 9.08803939819336, 7.925987243652344, 7.543087005615234, 7.786035537719727, 8.227825164794922, 7.907867431640625, 7.654905319213867, 7.793903350830078, 7.627964019775391, 7.833957672119141, 7.858037948608398, 7.474184036254883, 9.459972381591797, 7.726192474365234, 7.194995880126953, 7.364034652709961, 7.25102424621582, 7.650852203369141, 7.663965225219727, 9.334087371826172, 7.810115814208984, 7.543087005615234, 7.134914398193359, 7.922887802124023, 7.220029830932617, 7.606029510498047, 7.208108901977539, 7.333993911743164, 7.464170455932617, 7.679939270019531, 7.436990737915039, 7.62486457824707, 7.235050201416016, 7.420063018798828, 7.795095443725586, 7.39598274230957, 7.546901702880859, 7.582187652587891, 7.9669952392578125, 7.418155670166016, 7.539033889770508, 7.805109024047852, 7.086992263793945, 7.069826126098633, 7.833957672119141, 7.43412971496582, 7.035017013549805] - - Median time (milliseconds): - 7.62641429901 - ~~~ - -Before partitioning, this query took a median time of 72.02ms. After partitioning, the query took a median time of only 7.62ms. - -#### Writes - -Now let's again imagine that 100 people in Seattle and 100 people in New York want to create new Movr accounts: - -1. SSH to the instance in `us-west1-a` with the Python client. - -2. Create 100 Seattle-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [41.8248176574707, 9.701967239379883, 8.725166320800781, 9.058952331542969, 7.819175720214844, 6.247997283935547, 10.265827178955078, 7.627964019775391, 9.120941162109375, 7.977008819580078, 9.247064590454102, 8.929967880249023, 9.610176086425781, 14.40286636352539, 8.588075637817383, 8.67319107055664, 9.417057037353516, 7.652044296264648, 8.917093276977539, 9.135961532592773, 8.604049682617188, 9.220123291015625, 7.578134536743164, 9.096860885620117, 8.942842483520508, 8.63790512084961, 7.722139358520508, 13.59701156616211, 9.176015853881836, 11.484146118164062, 9.212017059326172, 7.563114166259766, 8.793115615844727, 8.80289077758789, 7.827043533325195, 7.6389312744140625, 17.47584342956543, 9.436845779418945, 7.63392448425293, 8.594989776611328, 9.002208709716797, 8.93402099609375, 8.71896743774414, 8.76307487487793, 8.156061172485352, 8.729934692382812, 8.738040924072266, 8.25190544128418, 8.971929550170898, 7.460832595825195, 8.889198303222656, 8.45789909362793, 8.761167526245117, 10.223865509033203, 8.892059326171875, 8.961915969848633, 8.968114852905273, 7.750988006591797, 7.761955261230469, 9.199142456054688, 9.02700424194336, 9.509086608886719, 9.428977966308594, 7.902860641479492, 8.940935134887695, 8.615970611572266, 8.75401496887207, 7.906913757324219, 8.179187774658203, 11.447906494140625, 8.71419906616211, 9.202003479003906, 9.263038635253906, 9.089946746826172, 8.92496109008789, 10.32114028930664, 7.913827896118164, 9.464025497436523, 10.612010955810547, 8.78596305847168, 8.878946304321289, 7.575035095214844, 10.657072067260742, 8.777856826782227, 8.649110794067383, 9.012937545776367, 8.931875228881836, 9.31406021118164, 9.396076202392578, 8.908987045288086, 8.002996444702148, 9.089946746826172, 7.5588226318359375, 8.918046951293945, 12.117862701416016, 7.266998291015625, 8.074045181274414, 8.955001831054688, 8.868932723999023, 8.755922317504883] - - Median time (milliseconds): - 8.90052318573 - ~~~ - - Before partitioning, this query took a median time of 48.40ms. After partitioning, the query took a median time of only 8.90ms. - -3. SSH to the instance in `us-east1-b` with the Python client. - -4. Create 100 new NY-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [276.3068675994873, 9.830951690673828, 8.772134780883789, 9.304046630859375, 8.24880599975586, 7.959842681884766, 7.848978042602539, 7.879018783569336, 7.754087448120117, 10.724067687988281, 13.960123062133789, 9.825944900512695, 9.60993766784668, 9.273052215576172, 9.41920280456543, 8.040904998779297, 16.484975814819336, 10.178089141845703, 8.322000503540039, 9.468793869018555, 8.002042770385742, 9.185075759887695, 9.54294204711914, 9.387016296386719, 9.676933288574219, 13.051986694335938, 9.506940841674805, 12.327909469604492, 10.377168655395508, 15.023946762084961, 9.985923767089844, 7.853031158447266, 9.43303108215332, 9.164094924926758, 10.941028594970703, 9.37199592590332, 12.359857559204102, 8.975028991699219, 7.728099822998047, 8.310079574584961, 9.792089462280273, 9.448051452636719, 8.057117462158203, 9.37795639038086, 9.753942489624023, 9.576082229614258, 8.192062377929688, 9.392023086547852, 7.97581672668457, 8.165121078491211, 9.660959243774414, 8.270978927612305, 9.901046752929688, 8.085966110229492, 10.581016540527344, 9.831905364990234, 7.883787155151367, 8.077859878540039, 8.161067962646484, 10.02812385559082, 7.9898834228515625, 9.840965270996094, 9.452104568481445, 9.747028350830078, 9.003162384033203, 9.206056594848633, 9.274005889892578, 7.8449249267578125, 8.827924728393555, 9.322881698608398, 12.08186149597168, 8.76307487487793, 8.353948593139648, 8.182048797607422, 7.736921310424805, 9.31406021118164, 9.263992309570312, 9.282112121582031, 7.823944091796875, 9.11712646484375, 8.099079132080078, 9.156942367553711, 8.363962173461914, 10.974884033203125, 8.729934692382812, 9.2620849609375, 9.27591323852539, 8.272886276245117, 8.25190544128418, 8.093118667602539, 9.259939193725586, 8.413076400756836, 8.198976516723633, 9.95182991027832, 8.024930953979492, 8.895158767700195, 8.243083953857422, 9.076833724975586, 9.994029998779297, 10.149955749511719] - - Median time (milliseconds): - 9.26303863525 - ~~~ - - Before partitioning, this query took a median time of 116.86ms. After partitioning, the query took a median time of only 9.26ms. diff --git a/src/current/_includes/v21.2/performance/test-performance.md b/src/current/_includes/v21.2/performance/test-performance.md deleted file mode 100644 index 018dbd902ab..00000000000 --- a/src/current/_includes/v21.2/performance/test-performance.md +++ /dev/null @@ -1,146 +0,0 @@ -In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases. - -#### Reads - -For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Query for the data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ {{page.app}} \ - --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'new york' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"] - ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"] - ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"] - ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"] - ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"] - ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"] - ... - - Times (milliseconds): - [933.8209629058838, 72.02410697937012, 72.45206832885742, 72.39294052124023, 72.8158950805664, 72.07584381103516, 72.21412658691406, 71.96712493896484, 71.75517082214355, 72.16811180114746, 71.78592681884766, 72.91603088378906, 71.91109657287598, 71.4719295501709, 72.40676879882812, 71.8080997467041, 71.84004783630371, 71.98500633239746, 72.40891456604004, 73.75001907348633, 71.45905494689941, 71.53081893920898, 71.46596908569336, 72.07608222961426, 71.94995880126953, 71.41804695129395, 71.29096984863281, 72.11899757385254, 71.63381576538086, 71.3050365447998, 71.83194160461426, 71.20394706726074, 70.9981918334961, 72.79205322265625, 72.63493537902832, 72.15285301208496, 71.8698501586914, 72.30591773986816, 71.53582572937012, 72.69001007080078, 72.03006744384766, 72.56317138671875, 71.61688804626465, 72.17121124267578, 70.20092010498047, 72.12018966674805, 73.34589958190918, 73.01592826843262, 71.49410247802734, 72.19099998474121] - - Median time (milliseconds): - 72.0270872116 - ~~~ - -As we saw earlier, the leaseholder for the `vehicles` table is in `us-west2-a` (Los Angeles), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client. - -For contrast, imagine we are now a Movr administrator in Los Angeles, and we want to get the IDs and descriptions of all Los Angeles-based bikes that are currently in use: - -1. SSH to the instance in `us-west2-a` with the Python client. - -2. Query for the data: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ {{page.app}} \ - --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'los angeles' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"] - ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"] - ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"] - ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"] - ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"] - - Times (milliseconds): - [782.6759815216064, 8.564949035644531, 8.226156234741211, 7.949113845825195, 7.86590576171875, 7.842063903808594, 7.674932479858398, 7.555961608886719, 7.642984390258789, 8.024930953979492, 7.717132568359375, 8.46409797668457, 7.520914077758789, 7.6541900634765625, 7.458925247192383, 7.671833038330078, 7.740020751953125, 7.771015167236328, 7.598161697387695, 8.411169052124023, 7.408857345581055, 7.469892501831055, 7.524967193603516, 7.764101028442383, 7.750988006591797, 7.2460174560546875, 6.927967071533203, 7.822990417480469, 7.27391242980957, 7.730960845947266, 7.4710845947265625, 7.4310302734375, 7.33494758605957, 7.455110549926758, 7.021188735961914, 7.083892822265625, 7.812976837158203, 7.625102996826172, 7.447957992553711, 7.179021835327148, 7.504940032958984, 7.224082946777344, 7.257938385009766, 7.714986801147461, 7.4939727783203125, 7.6160430908203125, 7.578849792480469, 7.890939712524414, 7.546901702880859, 7.411956787109375] - - Median time (milliseconds): - 7.6071023941 - ~~~ - -Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 7.60ms compared to the similar query in New York that took 72.02ms. - -#### Writes - -The geographic distribution of data impacts write performance as well. For example, imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts: - -1. SSH to the instance in `us-west1-a` with the Python client. - -2. Create 100 Seattle-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [277.4538993835449, 50.12702941894531, 47.75214195251465, 48.13408851623535, 47.872066497802734, 48.65407943725586, 47.78695106506348, 49.14689064025879, 52.770137786865234, 49.00097846984863, 48.68602752685547, 47.387123107910156, 47.36208915710449, 47.6841926574707, 46.49209976196289, 47.06096649169922, 46.753883361816406, 46.304941177368164, 48.90894889831543, 48.63715171813965, 48.37393760681152, 49.23295974731445, 50.13418197631836, 48.310041427612305, 48.57516288757324, 47.62911796569824, 47.77693748474121, 47.505855560302734, 47.89996147155762, 49.79205131530762, 50.76479911804199, 50.21500587463379, 48.73299598693848, 47.55592346191406, 47.35088348388672, 46.7071533203125, 43.00808906555176, 43.1060791015625, 46.02813720703125, 47.91092872619629, 68.71294975280762, 49.241065979003906, 48.9039421081543, 47.82295227050781, 48.26998710632324, 47.631025314331055, 64.51892852783203, 48.12812805175781, 67.33417510986328, 48.603057861328125, 50.31013488769531, 51.02396011352539, 51.45716667175293, 50.85396766662598, 49.07512664794922, 47.49894142150879, 44.67201232910156, 43.827056884765625, 44.412851333618164, 46.69189453125, 49.55601692199707, 49.16882514953613, 49.88598823547363, 49.31306838989258, 46.875, 46.69594764709473, 48.31886291503906, 48.378944396972656, 49.0570068359375, 49.417972564697266, 48.22111129760742, 50.662994384765625, 50.58097839355469, 75.44088363647461, 51.05400085449219, 50.85110664367676, 48.187971115112305, 56.7781925201416, 42.47403144836426, 46.2191104888916, 53.96890640258789, 46.697139739990234, 48.99096488952637, 49.1330623626709, 46.34690284729004, 47.09315299987793, 46.39410972595215, 46.51689529418945, 47.58000373840332, 47.924041748046875, 48.426151275634766, 50.22597312927246, 50.1859188079834, 50.37498474121094, 49.861907958984375, 51.477909088134766, 73.09293746948242, 48.779964447021484, 45.13692855834961, 42.2968864440918] - - Median time (milliseconds): - 48.4025478363 - ~~~ - -3. SSH to the instance in `us-east1-b` with the Python client. - -4. Create 100 new NY-based users: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [131.05082511901855, 116.88899993896484, 115.15498161315918, 117.095947265625, 121.04082107543945, 115.8750057220459, 113.80696296691895, 113.05880546569824, 118.41201782226562, 125.30899047851562, 117.5389289855957, 115.23890495300293, 116.84799194335938, 120.0411319732666, 115.62800407409668, 115.08989334106445, 113.37089538574219, 115.15498161315918, 115.96989631652832, 133.1961154937744, 114.25995826721191, 118.09396743774414, 122.24102020263672, 116.14608764648438, 114.80998992919922, 131.9139003753662, 114.54391479492188, 115.15307426452637, 116.7759895324707, 135.10799407958984, 117.18511581420898, 120.15485763549805, 118.0570125579834, 114.52388763427734, 115.28396606445312, 130.00011444091797, 126.45292282104492, 142.69423484802246, 117.60401725769043, 134.08493995666504, 117.47002601623535, 115.75007438659668, 117.98381805419922, 115.83089828491211, 114.88890647888184, 113.23404312133789, 121.1700439453125, 117.84791946411133, 115.35286903381348, 115.0820255279541, 116.99700355529785, 116.67394638061523, 116.1041259765625, 114.67289924621582, 112.98894882202148, 117.1119213104248, 119.78602409362793, 114.57300186157227, 129.58717346191406, 118.37983131408691, 126.68204307556152, 118.30306053161621, 113.27195167541504, 114.22920227050781, 115.80777168273926, 116.81294441223145, 114.76683616638184, 115.1430606842041, 117.29192733764648, 118.24417114257812, 116.56999588012695, 113.8620376586914, 114.88819122314453, 120.80597877502441, 132.39002227783203, 131.00910186767578, 114.56179618835449, 117.03896522521973, 117.72680282592773, 115.6010627746582, 115.27681350708008, 114.52317237854004, 114.87483978271484, 117.78903007507324, 116.65701866149902, 122.6949691772461, 117.65193939208984, 120.5449104309082, 115.61179161071777, 117.54202842712402, 114.70890045166016, 113.58809471130371, 129.7171115875244, 117.57993698120117, 117.1119213104248, 117.64001846313477, 140.66505432128906, 136.41691207885742, 116.24789237976074, 115.19908905029297] - - Median time (milliseconds): - 116.868495941 - ~~~ - -It took 48.40ms to create a user in Seattle and 116.86ms to create a user in New York. To better understand this discrepancy, let's look at the distribution of data for the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql \ -{{page.certs}} \ ---host=
\
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;"
-~~~
-
-~~~
-  start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
-  NULL      | NULL    |       49 | {2,6,8}  |            6
-(1 row)
-~~~
-
-For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-west1-a` zone. This means that:
-
-- When creating a user in Seattle, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client.
-- When creating a user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1-a`. It then has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client back in the east.
diff --git a/src/current/_includes/v21.2/performance/tuning-secure.py b/src/current/_includes/v21.2/performance/tuning-secure.py
deleted file mode 100644
index a644dbb1c87..00000000000
--- a/src/current/_includes/v21.2/performance/tuning-secure.py
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
-    description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
-    help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
-    help="statement to execute")
-parser.add_argument("--repeat", type=int,
-    help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
-    help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
-    help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
-    database='movr',
-    user='root',
-    host=args.host,
-    port=26257,
-    sslmode='require',
-    sslrootcert='certs/ca.crt',
-    sslkey='certs/client.root.key',
-    sslcert='certs/client.root.crt'
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-def median(lst):
-    n = len(lst)
-    if n < 1:
-        return None
-    if n % 2 == 1:
-        return sorted(lst)[n//2]
-    else:
-        return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
-times = list()
-for n in range(args.repeat):
-    start = time.time()
-    statement = args.statement
-    cur.execute(statement)
-    if n < 1:
-        if cur.description is not None:
-            colnames = [desc[0] for desc in cur.description]
-            print("")
-            print("Result:")
-            print(colnames)
-            rows = cur.fetchall()
-            for row in rows:
-                print([str(cell) for cell in row])
-    end = time.time()
-    times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
-    print("Times (milliseconds):")
-    print(times)
-    print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
-    print("Cumulative time (milliseconds):")
-    print(sum(times))
-    print("")
diff --git a/src/current/_includes/v21.2/performance/tuning.py b/src/current/_includes/v21.2/performance/tuning.py
deleted file mode 100644
index dcb567dad91..00000000000
--- 
a/src/current/_includes/v21.2/performance/tuning.py +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env python - -import argparse -import psycopg2 -import time - -parser = argparse.ArgumentParser( - description="test performance of statements against movr database") -parser.add_argument("--host", required=True, - help="ip address of one of the CockroachDB nodes") -parser.add_argument("--statement", required=True, - help="statement to execute") -parser.add_argument("--repeat", type=int, - help="number of times to repeat the statement", default = 20) -parser.add_argument("--times", - help="print time for each repetition of the statement", action="store_true") -parser.add_argument("--cumulative", - help="print cumulative time for all repetitions of the statement", action="store_true") -args = parser.parse_args() - -conn = psycopg2.connect( - database='movr', - user='root', - host=args.host, - port=26257 -) -conn.set_session(autocommit=True) -cur = conn.cursor() - -def median(lst): - n = len(lst) - if n < 1: - return None - if n % 2 == 1: - return sorted(lst)[n//2] - else: - return sum(sorted(lst)[n//2-1:n//2+1])/2.0 - -times = list() -for n in range(args.repeat): - start = time.time() - statement = args.statement - cur.execute(statement) - if n < 1: - if cur.description is not None: - colnames = [desc[0] for desc in cur.description] - print("") - print("Result:") - print(colnames) - rows = cur.fetchall() - for row in rows: - print([str(cell) for cell in row]) - end = time.time() - times.append((end - start)* 1000) - -cur.close() -conn.close() - -print("") -if args.times: - print("Times (milliseconds):") - print(times) - print("") -# print("Average time (milliseconds):") -# print(float(sum(times))/len(times)) -# print("") -print("Median time (milliseconds):") -print(median(times)) -print("") -if args.cumulative: - print("Cumulative time (milliseconds):") - print(sum(times)) - print("") diff --git a/src/current/_includes/v21.2/performance/use-hash-sharded-indexes.md b/src/current/_includes/v21.2/performance/use-hash-sharded-indexes.md deleted file mode 100644 index 715b378c9bb..00000000000 --- a/src/current/_includes/v21.2/performance/use-hash-sharded-indexes.md +++ /dev/null @@ -1 +0,0 @@ -For performance reasons, we discourage [indexing on sequential keys](indexes.html). If, however, you are working with a table that must be indexed on sequential keys, you should use [hash-sharded indexes](hash-sharded-indexes.html). Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hot spots and improving write performance on sequentially-keyed indexes at a small cost to read performance. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/advertise-addr-join.md b/src/current/_includes/v21.2/prod-deployment/advertise-addr-join.md deleted file mode 100644 index 67019d1fcea..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/advertise-addr-join.md +++ /dev/null @@ -1,4 +0,0 @@ -Flag | Description ------|------------ -`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.

This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). -`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. diff --git a/src/current/_includes/v21.2/prod-deployment/aws-inbound-rules.md b/src/current/_includes/v21.2/prod-deployment/aws-inbound-rules.md deleted file mode 100644 index 8be748205a6..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/aws-inbound-rules.md +++ /dev/null @@ -1,31 +0,0 @@ -#### Inter-node and load balancer-node communication - - Field | Value --------|------------------- - Port Range | **26257** - Source | The ID of your security group (e.g., *sg-07ab277a*) - -#### Application data - - Field | Value --------|------------------- - Port Range | **26257** - Source | Your application's IP ranges - -#### DB Console - - Field | Value --------|------------------- - Port Range | **8080** - Source | Your network's IP ranges - -You can set your network IP by selecting "My IP" in the Source field. - -#### Load balancer-health check communication - - Field | Value --------|------------------- - Port Range | **8080** - Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) - - To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/backup.sh b/src/current/_includes/v21.2/prod-deployment/backup.sh deleted file mode 100644 index efcbd4c7041..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/backup.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -set -euo pipefail - -# This script creates full backups when run on the configured -# day of the week and incremental backups when run on other days, and tracks -# recently created backups in a file to pass as the base for incremental backups. - -what="" # Leave empty for cluster backup, or add "DATABASE database_name" to backup a database. -base="/backups" # The URL where you want to store the backup. -extra="" # Any additional parameters that need to be appended to the BACKUP URI e.g., AWS key params. -recent=recent_backups.txt # File in which recent backups are tracked. -backup_parameters= # e.g., "WITH revision_history" - -# Customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, `--port`, and additional flags as needed to connect to the SQL client. -runsql() { cockroach sql --insecure -e "$1"; } - -destination="${base}/$(date +"%Y-%V")${extra}" # %V is the week number of the year, with Monday as the first day of the week. - -runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' $backup_parameters" -echo "backed up to ${destination}" diff --git a/src/current/_includes/v21.2/prod-deployment/check-sql-query-performance.md b/src/current/_includes/v21.2/prod-deployment/check-sql-query-performance.md deleted file mode 100644 index 8473d14f5b4..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/check-sql-query-performance.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If you aren't sure whether SQL query performance needs to be improved on your cluster, see [Identify slow statements](query-behavior-troubleshooting.html#identify-slow-statements). 
-{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/cloud-report.md b/src/current/_includes/v21.2/prod-deployment/cloud-report.md deleted file mode 100644 index aa2a765af6e..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/cloud-report.md +++ /dev/null @@ -1 +0,0 @@ -Cockroach Labs creates a yearly cloud report focused on evaluating hardware performance. For more information, see the [2022 Cloud Report](https://www.cockroachlabs.com/guides/2022-cloud-report/). \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/cluster-unavailable-monitoring.md b/src/current/_includes/v21.2/prod-deployment/cluster-unavailable-monitoring.md deleted file mode 100644 index d4d8803ca1f..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/cluster-unavailable-monitoring.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If the cluster becomes unavailable, the DB Console and Cluster API will also become unavailable. You can continue to monitor the cluster via the [Prometheus endpoint](monitoring-and-alerting.html#prometheus-endpoint) and [logs](logging-overview.html). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-command-commit-latency.md b/src/current/_includes/v21.2/prod-deployment/healthy-command-commit-latency.md deleted file mode 100644 index d055f37aded..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-command-commit-latency.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: On SSDs ([strongly recommended](recommended-production-settings.html#storage)), this should be between 1 and 100 milliseconds. On HDDs, this should be no more than 1 second. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-cpu-percent.md b/src/current/_includes/v21.2/prod-deployment/healthy-cpu-percent.md deleted file mode 100644 index a58b0b87973..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-cpu-percent.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: CPU utilized by CockroachDB should not persistently exceed 80%. Because this metric does not reflect system CPU usage, values above 80% suggest that actual CPU utilization is nearing 100%. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-crdb-memory.md b/src/current/_includes/v21.2/prod-deployment/healthy-crdb-memory.md deleted file mode 100644 index a0994e08eed..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-crdb-memory.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: RSS minus Go Total and CGo Total should not exceed 100 MiB. Go Allocated should not exceed a few hundred MiB. CGo Allocated should not exceed the `--cache` size. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-disk-ops-in-progress.md b/src/current/_includes/v21.2/prod-deployment/healthy-disk-ops-in-progress.md deleted file mode 100644 index e80714df120..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-disk-ops-in-progress.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: This value should be 0 or single-digit values for short periods of time. If the values persist in double digits, you may have an I/O bottleneck. 
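If the DB Console is unavailable, the Prometheus endpoint mentioned above is the fallback for checking metrics like these. The following is a minimal sketch, not part of the original docs: it assumes an insecure cluster (plain HTTP), uses `localhost:8080` as a placeholder for any node's HTTP address, and the metric names shown are examples that may vary by version.

~~~ python
#!/usr/bin/env python

# Sketch: scrape a node's Prometheus endpoint (/_status/vars) and print a few
# health metrics. localhost:8080 is a placeholder for the node's --http-addr;
# assumes an insecure cluster. Metric names are examples; verify against your
# own /_status/vars output.

from urllib.request import urlopen

NODE_HTTP = "http://localhost:8080"  # placeholder address

WATCHED = (
    "sys_cpu_combined_percent_normalized",  # CPU utilization (see guidance above)
    "capacity_used",                        # disk capacity in use
    "capacity_available",                   # disk capacity remaining
)

with urlopen(NODE_HTTP + "/_status/vars") as resp:
    for line in resp.read().decode("utf-8").splitlines():
        if line.startswith(WATCHED):
            print(line)
~~~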
\ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-lsm.md b/src/current/_includes/v21.2/prod-deployment/healthy-lsm.md deleted file mode 100644 index 31fd320af2a..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-lsm.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: The number of L0 files should **not** be in the high thousands. High values indicate heavy write load that is causing accumulation of files in level 0. These files are not being compacted quickly enough to lower levels, resulting in a misshapen LSM. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-node-heartbeat-latency.md b/src/current/_includes/v21.2/prod-deployment/healthy-node-heartbeat-latency.md deleted file mode 100644 index ed58182c98f..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-node-heartbeat-latency.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: Less than 100ms in addition to the [network latency](ui-network-latency-page.html) between nodes in the cluster. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-read-amplification.md b/src/current/_includes/v21.2/prod-deployment/healthy-read-amplification.md deleted file mode 100644 index c7ffe9c6d17..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-read-amplification.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: Read amplification factor should be in the single digits. A value exceeding 50 for 1 hour strongly suggests that the LSM tree has an unhealthy shape. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-sql-memory.md b/src/current/_includes/v21.2/prod-deployment/healthy-sql-memory.md deleted file mode 100644 index 0b963ed55b3..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-sql-memory.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: This value should not exceed the [`--max-sql-memory`](recommended-production-settings.html#cache-and-sql-memory-size) size. A healthy threshold is 75% of allocated `--max-sql-memory`. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-storage-capacity.md b/src/current/_includes/v21.2/prod-deployment/healthy-storage-capacity.md deleted file mode 100644 index af6253c932d..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-storage-capacity.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: Used capacity should not persistently exceed 80% of the total capacity. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/healthy-workload-concurrency.md b/src/current/_includes/v21.2/prod-deployment/healthy-workload-concurrency.md deleted file mode 100644 index 6e8d4891339..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/healthy-workload-concurrency.md +++ /dev/null @@ -1 +0,0 @@ -**Expected values for a healthy cluster**: At any time, the total number of actively executing SQL statements should not exceed 4 times the number of vCPUs in the cluster. For more details, see [Sizing connection pools](connection-pooling.html#sizing-connection-pools). 
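To make the 4x-vCPU guidance above concrete, here is a back-of-the-envelope sketch; the node counts and instance counts are made-up example inputs, not measurements from a real deployment.

~~~ python
# Illustrative pool sizing from the 4x-vCPU rule of thumb above.
# All numbers are example inputs.

nodes = 3
vcpus_per_node = 8
app_instances = 6  # application processes that each hold their own pool

connection_budget = 4 * nodes * vcpus_per_node   # 96 connections total
pool_size = connection_budget // app_instances   # 16 per instance

print("total connection budget:", connection_budget)
print("max pool size per app instance:", pool_size)
~~~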
\ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-flag.md b/src/current/_includes/v21.2/prod-deployment/insecure-flag.md deleted file mode 100644 index a13951ba4bc..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecure-flag.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -The `--insecure` flag used in this tutorial is intended for non-production testing only. To run CockroachDB in production, use a secure cluster instead. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v21.2/prod-deployment/insecure-initialize-cluster.md deleted file mode 100644 index 1bf99ee27c0..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecure-initialize-cluster.md +++ /dev/null @@ -1,12 +0,0 @@ -On your local machine, complete the node startup process and have them join together as a cluster: - -1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Run the [`cockroach init`](cockroach-init.html) command, with the `--host` flag set to the address of any node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=
- ~~~ - - Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-recommendations.md b/src/current/_includes/v21.2/prod-deployment/insecure-recommendations.md deleted file mode 100644 index e27b3489865..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecure-recommendations.md +++ /dev/null @@ -1,13 +0,0 @@ -- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks: - - Your cluster is open to any client that can access any node's IP addresses. - - Any user, even `root`, can log in without providing a password. - - Any user, connecting as `root`, can read or write any data in your cluster. - - There is no network encryption or authentication, and thus no confidentiality. - -- Decide how you want to access your DB Console: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console. diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-requirements.md b/src/current/_includes/v21.2/prod-deployment/insecure-requirements.md deleted file mode 100644 index fb2faee26e8..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecure-requirements.md +++ /dev/null @@ -1,9 +0,0 @@ -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your DB Console - -- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html). - -{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %} \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v21.2/prod-deployment/insecure-scale-cluster.md deleted file mode 100644 index 44b630a2310..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecure-scale-cluster.md +++ /dev/null @@ -1,117 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
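After updating the load balancer, it can be useful to confirm that the new node is visible to the rest of the cluster. Here is a rough sketch in the style of the tuning scripts earlier in this document, assuming an insecure cluster and a placeholder host; the `crdb_internal.gossip_nodes` columns used here should be verified against your CockroachDB version.

~~~ python
#!/usr/bin/env python

# Sketch: list the nodes the cluster currently knows about, to confirm the
# newly added node has joined. Assumes an insecure cluster; the host is a
# placeholder for any node or the load balancer.

import psycopg2

conn = psycopg2.connect(database="defaultdb", user="root",
                        host="localhost", port=26257)
cur = conn.cursor()
cur.execute("SELECT node_id, address, is_live FROM crdb_internal.gossip_nodes")
for node_id, address, is_live in cur.fetchall():
    print(node_id, address, "live" if is_live else "not live")
cur.close()
conn.close()
~~~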
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - - Save the file in the `/etc/systemd/system/` directory - -8. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - -9. Repeat these steps for each additional node that you want in your cluster. - -
diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v21.2/prod-deployment/insecure-start-nodes.md deleted file mode 100644 index a2f1dc9080e..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecure-start-nodes.md +++ /dev/null @@ -1,188 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.

This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`
`--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -6. Repeat these steps for each additional node that you want in your cluster. - -
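To make the `--cache` and `--max-sql-memory` settings above concrete, here is a quick sketch of the arithmetic on a hypothetical 32 GiB machine; the 75% combined ceiling echoes the production guidance later in this document.

~~~ python
# Sketch of the --cache / --max-sql-memory arithmetic for a hypothetical
# 32 GiB machine. With both flags at .25, the two pools claim half of RAM,
# safely under the recommended 75% combined ceiling.

system_ram_gib = 32.0
cache_fraction = 0.25
max_sql_memory_fraction = 0.25

print("cache size: %.1f GiB" % (system_ram_gib * cache_fraction))               # 8.0 GiB
print("max SQL memory: %.1f GiB" % (system_ram_gib * max_sql_memory_fraction))  # 8.0 GiB

assert cache_fraction + max_sql_memory_fraction <= 0.75, \
    "combined --cache and --max-sql-memory should stay under 75% of RAM"
~~~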
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -6. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -7. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - -9. In the sample configuration template, specify values for the following flags: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-port=8080`. 
To set these options manually, see [Start a Node](cockroach-start.html).
-
-10. Start the CockroachDB cluster:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ systemctl start insecurecockroachdb
-    ~~~
-
-11. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb`.
-{{site.data.alerts.end}}
-
-
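Once `systemctl start insecurecockroachdb` returns, the node may still take a moment to accept connections (and, before cluster initialization, it will report not-ready). The following polling sketch against the node's HTTP health endpoint is an illustration only, with `localhost:8080` as a placeholder for the node's HTTP address.

~~~ python
#!/usr/bin/env python

# Sketch: poll a node's readiness endpoint after starting the systemd unit.
# localhost:8080 is a placeholder; assumes an insecure cluster. The
# /health?ready=1 endpoint returns 200 once the node is accepting SQL traffic.

import time
from urllib.request import urlopen
from urllib.error import URLError

URL = "http://localhost:8080/health?ready=1"

for _ in range(30):
    try:
        if urlopen(URL, timeout=2).getcode() == 200:
            print("node is ready")
            break
    except URLError:
        pass  # not reachable or not ready yet; keep polling
    time.sleep(2)
else:
    print("node did not report ready in time")
~~~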
diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v21.2/prod-deployment/insecure-test-cluster.md deleted file mode 100644 index 9f1d66fad3b..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](cockroach-sql.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
-    ~~~
-
-2. Create an `insecurenodetest` database:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE insecurenodetest;
-    ~~~
-
-3. View the cluster's databases, which will include `insecurenodetest`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > SHOW DATABASES;
-    ~~~
-
-    ~~~
-    +--------------------+
-    |      Database      |
-    +--------------------+
-    | crdb_internal      |
-    | information_schema |
-    | insecurenodetest   |
-    | pg_catalog         |
-    | system             |
-    +--------------------+
-    (5 rows)
-    ~~~
-
-4. Use `\q` to exit the SQL shell.
diff --git a/src/current/_includes/v21.2/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v21.2/prod-deployment/insecure-test-load-balancing.md
deleted file mode 100644
index ae47b5cd160..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/insecure-test-load-balancing.md
+++ /dev/null
@@ -1,79 +0,0 @@
-CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_info}}
-Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_success}}
-For comprehensive guidance on benchmarking CockroachDB with TPC-C, see [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html).
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
-    This should be a machine that is not running a CockroachDB node.
-
-1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-1. Copy the binary into the `PATH`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-    If you get a permissions error, prefix the command with `sudo`.
-
-1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach workload init tpcc \
-    'postgresql://root@:26257/tpcc?sslmode=disable'
-    ~~~
-
-1. 
Use the `cockroach workload` command to run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@:26257/tpcc?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - - {{site.data.alerts.callout_success}} - For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`. - {{site.data.alerts.end}} - -1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. diff --git a/src/current/_includes/v21.2/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v21.2/prod-deployment/insecurecockroachdb.service deleted file mode 100644 index b027b941009..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/insecurecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v21.2/prod-deployment/join-flag-multi-region.md b/src/current/_includes/v21.2/prod-deployment/join-flag-multi-region.md deleted file mode 100644 index 93ae34a8716..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/join-flag-multi-region.md +++ /dev/null @@ -1 +0,0 @@ -When starting a multi-region cluster, set more than one `--join` address per region, and select nodes that are spread across failure domains. This ensures [high availability](architecture/replication-layer.html#overview). 
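The join list itself is just a comma-separated flag, so spreading it across failure domains is easy to script. Below is a small sketch with placeholder regions and addresses, illustrating the multi-region guidance above.

~~~ python
# Sketch: assemble a --join flag that spans failure domains, per the
# multi-region guidance above. Regions and addresses are placeholders.

seeds_by_region = {
    "us-east1": ["10.0.1.1", "10.0.1.2"],
    "us-west1": ["10.0.2.1", "10.0.2.2"],
    "europe-west1": ["10.0.3.1", "10.0.3.2"],
}

join_flag = "--join=" + ",".join(
    addr
    for region in sorted(seeds_by_region)
    for addr in seeds_by_region[region]
)
print(join_flag)
# --join=10.0.3.1,10.0.3.2,10.0.1.1,10.0.1.2,10.0.2.1,10.0.2.2
~~~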
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/prod-deployment/join-flag-single-region.md b/src/current/_includes/v21.2/prod-deployment/join-flag-single-region.md
deleted file mode 100644
index 99250cdfee9..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/join-flag-single-region.md
+++ /dev/null
@@ -1 +0,0 @@
-For a cluster in a single region, set 3-5 `--join` addresses. Each starting node will attempt to contact one of the join hosts. In case a join host cannot be reached, the node will try another address on the list until it can join the gossip network.
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/prod-deployment/monitor-cluster.md b/src/current/_includes/v21.2/prod-deployment/monitor-cluster.md
deleted file mode 100644
index 363ef1167c1..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/monitor-cluster.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Despite CockroachDB's various [built-in safeguards against failure](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html).
diff --git a/src/current/_includes/v21.2/prod-deployment/process-termination.md b/src/current/_includes/v21.2/prod-deployment/process-termination.md
deleted file mode 100644
index 23f9310572b..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/process-termination.md
+++ /dev/null
@@ -1,13 +0,0 @@
-{{site.data.alerts.callout_danger}}
-We do not recommend sending `SIGKILL` to perform a "hard" shutdown, which bypasses CockroachDB's [node shutdown logic](#node-shutdown-sequence) and forcibly terminates the process. This can corrupt log files and, in certain edge cases, can result in temporary data unavailability, latency spikes, uncertainty errors, ambiguous commit errors, or query timeouts. When decommissioning, a hard shutdown will leave ranges under-replicated and vulnerable to another node failure, causing [quorum](architecture/replication-layer.html#overview) loss in the window before up-replication completes.
-{{site.data.alerts.end}}
-
-- On production deployments, use the process manager to send `SIGTERM` to the process.
-
-    - For example, with [`systemd`](https://www.freedesktop.org/wiki/Software/systemd/), run `systemctl stop {systemd config filename}`.
-
-- When using CockroachDB for local testing:
-
-    - When running a server in the foreground, use `ctrl-c` in the terminal to send `SIGINT` to the process.
-
-    - When running with the [`--background` flag](cockroach-start.html#general), use `pkill`, `kill`, or look up the process ID with `ps -ef | grep cockroach | grep -v grep` and then run `kill -TERM {process ID}`.
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/prod-deployment/prod-guidance-cache-max-sql-memory.md b/src/current/_includes/v21.2/prod-deployment/prod-guidance-cache-max-sql-memory.md
deleted file mode 100644
index 0a6b979c581..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/prod-guidance-cache-max-sql-memory.md
+++ /dev/null
@@ -1 +0,0 @@
-For production deployments, set `--cache` to `25%` or higher. 
Avoid setting `--cache` and `--max-sql-memory` to a combined value of more than 75% of a machine's total RAM. Doing so increases the risk of memory-related failures. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/prod-guidance-connection-pooling.md b/src/current/_includes/v21.2/prod-deployment/prod-guidance-connection-pooling.md deleted file mode 100644 index 17b87a9988b..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/prod-guidance-connection-pooling.md +++ /dev/null @@ -1 +0,0 @@ -The total number of workload connections across all connection pools **should not exceed 4 times the number of vCPUs** in the cluster by a large amount. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/prod-guidance-disable-swap.md b/src/current/_includes/v21.2/prod-deployment/prod-guidance-disable-swap.md deleted file mode 100644 index f988eb016d4..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/prod-guidance-disable-swap.md +++ /dev/null @@ -1 +0,0 @@ -Disable Linux memory swapping. Over-allocating memory on production machines can lead to unexpected performance issues when pages have to be read back into memory. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/prod-guidance-larger-nodes.md b/src/current/_includes/v21.2/prod-deployment/prod-guidance-larger-nodes.md deleted file mode 100644 index c165a0130b7..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/prod-guidance-larger-nodes.md +++ /dev/null @@ -1 +0,0 @@ -To optimize for throughput, use larger nodes with up to 32 vCPUs. To further increase throughput, add more nodes to the cluster instead of increasing node size. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/prod-guidance-log-volume.md b/src/current/_includes/v21.2/prod-deployment/prod-guidance-log-volume.md deleted file mode 100644 index 7cc1a26ece7..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/prod-guidance-log-volume.md +++ /dev/null @@ -1 +0,0 @@ -Store CockroachDB [log files](configure-logs.html#logging-directory) in a separate volume from the main data store so that logging is not impacted by I/O throttling. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/prod-guidance-lvm.md b/src/current/_includes/v21.2/prod-deployment/prod-guidance-lvm.md deleted file mode 100644 index c1cd5885f1e..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/prod-guidance-lvm.md +++ /dev/null @@ -1 +0,0 @@ -Do not use LVM in the I/O path. Dynamically resizing CockroachDB store volumes can result in significant performance degradation. Using LVM snapshots in lieu of CockroachDB [backup and restore](take-full-and-incremental-backups.html) is also not supported. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/prod-guidance-store-volume.md b/src/current/_includes/v21.2/prod-deployment/prod-guidance-store-volume.md deleted file mode 100644 index c957422ce07..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/prod-guidance-store-volume.md +++ /dev/null @@ -1 +0,0 @@ -Use dedicated volumes for the CockroachDB [store](cockroach-start.html#store). Do not share the store volume with any other I/O activity. 
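One way to sanity-check the log-volume and store-volume guidance above is to verify that the two paths sit on different devices. The sketch below is illustrative only; the paths are example values to substitute with your actual `--store` and logging locations.

~~~ python
#!/usr/bin/env python

# Sketch: confirm the store directory and the log directory are on separate
# devices, so logging I/O cannot throttle the store. Paths are examples.

import os

store_dir = "/mnt/cockroach-data"  # example --store path
log_dir = "/var/log/cockroach"     # example logging directory

if os.stat(store_dir).st_dev == os.stat(log_dir).st_dev:
    print("WARNING: store and logs share one device; use a separate log volume")
else:
    print("OK: store and logs are on separate devices")
~~~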
\ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/prod-see-also.md b/src/current/_includes/v21.2/prod-deployment/prod-see-also.md deleted file mode 100644 index 42ec5cd32c0..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/prod-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Production Checklist](recommended-production-settings.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](kubernetes-overview.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) -- [Performance Benchmarking](performance-benchmarking-with-tpcc-small.html) -- [Performance Tuning](performance-best-practices-overview.html) -- [Local Deployment](start-a-local-cluster.html) diff --git a/src/current/_includes/v21.2/prod-deployment/provision-cpu.md b/src/current/_includes/v21.2/prod-deployment/provision-cpu.md deleted file mode 100644 index 48896a432cd..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/provision-cpu.md +++ /dev/null @@ -1 +0,0 @@ -{% if include.threshold == "absolute_minimum" %}**4 vCPUs**{% elsif include.threshold == "minimum" %}**8 vCPUs**{% elsif include.threshold == "maximum" %}**32 vCPUs**{% endif %} diff --git a/src/current/_includes/v21.2/prod-deployment/provision-disk-io.md b/src/current/_includes/v21.2/prod-deployment/provision-disk-io.md deleted file mode 100644 index 2ece47203e4..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/provision-disk-io.md +++ /dev/null @@ -1 +0,0 @@ -**500 IOPS and 30 MB/s per vCPU** \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/provision-memory.md b/src/current/_includes/v21.2/prod-deployment/provision-memory.md deleted file mode 100644 index 98136337374..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/provision-memory.md +++ /dev/null @@ -1 +0,0 @@ -**4 GiB of RAM per vCPU** \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/provision-storage.md b/src/current/_includes/v21.2/prod-deployment/provision-storage.md deleted file mode 100644 index b5254ea3915..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/provision-storage.md +++ /dev/null @@ -1 +0,0 @@ -**150 GiB per vCPU** \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/recommended-instances-aws.md b/src/current/_includes/v21.2/prod-deployment/recommended-instances-aws.md deleted file mode 100644 index 87d0f53e95c..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/recommended-instances-aws.md +++ /dev/null @@ -1,7 +0,0 @@ -- Use general-purpose [`m6i` or `m6a`](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose-instances.html) VMs with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html). For example, Cockroach Labs has used `m6i.2xlarge` for performance benchmarking. If your workload requires high throughput, use network-optimized `m5n` instances. To simulate bare-metal deployments, use `m5d` with [SSD Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). - - - `m5` and `m5a` instances, and [compute-optimized `c5`, `c5a`, and `c5n`](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compute-optimized-instances.html) instances, are also acceptable. 
-
-  {{site.data.alerts.callout_danger}}
-  **Do not** use [burstable performance instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html), which limit the load on a single core.
-  {{site.data.alerts.end}}
diff --git a/src/current/_includes/v21.2/prod-deployment/recommended-instances-azure.md b/src/current/_includes/v21.2/prod-deployment/recommended-instances-azure.md
deleted file mode 100644
index 74263dbe9d0..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/recommended-instances-azure.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- Use general-purpose [Dsv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/dv5-dsv5-series) and [Dasv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series) or memory-optimized [Ev5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/ev5-esv5-series) and [Easv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/easv5-eadsv5-series#easv5-series) VMs. For example, Cockroach Labs has used `Standard_D8s_v5`, `Standard_D8as_v5`, `Standard_E8s_v5`, and `Standard_E8as_v5` for performance benchmarking.
-
-  - Compute-optimized [F-series](https://docs.microsoft.com/en-us/azure/virtual-machines/fsv2-series) VMs are also acceptable.
-
-  {{site.data.alerts.callout_danger}}
-  Do not use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on CPU resources. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well.
-  {{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/prod-deployment/recommended-instances-gcp.md b/src/current/_includes/v21.2/prod-deployment/recommended-instances-gcp.md
deleted file mode 100644
index 6dbe048cd16..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/recommended-instances-gcp.md
+++ /dev/null
@@ -1,5 +0,0 @@
-- Use general-purpose [`t2d-standard`, `n2-standard`, or `n2d-standard`](https://cloud.google.com/compute/pricing#predefined_machine_types) VMs, or use [custom VMs](https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type). For example, Cockroach Labs has used `t2d-standard-8`, `n2-standard-8`, and `n2d-standard-8` for performance benchmarking.
-
-  {{site.data.alerts.callout_danger}}
-  Do not use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on CPU resources.
-  {{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/prod-deployment/resolution-excessive-concurrency.md b/src/current/_includes/v21.2/prod-deployment/resolution-excessive-concurrency.md
deleted file mode 100644
index 8d776db1dba..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/resolution-excessive-concurrency.md
+++ /dev/null
@@ -1 +0,0 @@
-To prevent issues with workload concurrency, [provision sufficient CPU](recommended-production-settings.html#sizing) and use [connection pooling](recommended-production-settings.html#connection-pooling) for the workload. 
\ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/resolution-inverted-lsm.md b/src/current/_includes/v21.2/prod-deployment/resolution-inverted-lsm.md deleted file mode 100644 index 3ae9fb03626..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/resolution-inverted-lsm.md +++ /dev/null @@ -1 +0,0 @@ -If LSM compaction falls behind, throttle your workload concurrency to allow compaction to catch up and restore a healthy LSM shape. {% include {{ page.version.version }}/prod-deployment/prod-guidance-connection-pooling.md %} If a node is severely impacted, you can [start a new node](cockroach-start.html) and then [decommission the problematic node](node-shutdown.html?filters=decommission#remove-nodes). \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/resolution-oom-crash.md b/src/current/_includes/v21.2/prod-deployment/resolution-oom-crash.md deleted file mode 100644 index b2c6c96e356..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/resolution-oom-crash.md +++ /dev/null @@ -1 +0,0 @@ -To prevent OOM crashes, [provision sufficient memory](recommended-production-settings.html#memory). If all CockroachDB machines are provisioned and configured correctly, either run the CockroachDB process on another node with sufficient memory, or [reduce the memory allocated to CockroachDB](recommended-production-settings.html#cache-and-sql-memory-size). \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/resolution-untuned-query.md b/src/current/_includes/v21.2/prod-deployment/resolution-untuned-query.md deleted file mode 100644 index e81ff66a53b..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/resolution-untuned-query.md +++ /dev/null @@ -1 +0,0 @@ -If you find queries that are consuming too much memory, [cancel the queries](manage-long-running-queries.html#cancel-long-running-queries) to free up memory usage. For information on optimizing query performance, see [SQL Performance Best Practices](performance-best-practices-overview.html). \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v21.2/prod-deployment/secure-generate-certificates.md deleted file mode 100644 index 9870de5b0cf..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-generate-certificates.md +++ /dev/null @@ -1,201 +0,0 @@ -You can use `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands. - -Locally, you'll need to [create the following certificates and keys](cockroach-cert.html): - -- A certificate authority (CA) key pair (`ca.crt` and `ca.key`). -- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers. -- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine. - -{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}} - -1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. 
Create two directories: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -3. Create the CA certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -5. Upload the CA certificate and node certificate and key to the first node: - - {% if page.title contains "Google" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud compute ssh \ - --project \ - --command "mkdir certs" - ~~~ - - {{site.data.alerts.callout_info}} - `gcloud compute ssh` associates your public SSH key with the GCP project and is only needed when connecting to the first node. See the [GCP docs](https://cloud.google.com/sdk/gcloud/reference/compute/ssh) for more details. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% elsif page.title contains "AWS" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh-add /path/.pem - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -6. Delete the local copy of the node certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}} - This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag. - {{site.data.alerts.end}} - -7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -8. 
Upload the CA certificate and node certificate and key to the second node: - - {% if page.title contains "AWS" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -9. Repeat steps 6 - 8 for each additional node. - -10. Create a client certificate and key for the `root` user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/client.root.crt \ - certs/client.root.key \ - @:~/certs - ~~~ - - In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well. - -{{site.data.alerts.callout_info}} -On accessing the DB Console in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-db-console-for-a-secure-cluster). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v21.2/prod-deployment/secure-initialize-cluster.md deleted file mode 100644 index fc92a82b724..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-initialize-cluster.md +++ /dev/null @@ -1,8 +0,0 @@ -On your local machine, run the [`cockroach init`](cockroach-init.html) command to complete the node startup process and have them join together as a cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach init --certs-dir=certs --host=
-~~~ - -After running this command, each node prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v21.2/prod-deployment/secure-recommendations.md b/src/current/_includes/v21.2/prod-deployment/secure-recommendations.md deleted file mode 100644 index 528850dbbb0..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-recommendations.md +++ /dev/null @@ -1,7 +0,0 @@ -- Decide how you want to access your DB Console: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console. diff --git a/src/current/_includes/v21.2/prod-deployment/secure-requirements.md b/src/current/_includes/v21.2/prod-deployment/secure-requirements.md deleted file mode 100644 index 5c35b0898c8..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-requirements.md +++ /dev/null @@ -1,11 +0,0 @@ -- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates. - -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your DB Console - -- Carefully review the [Production Checklist](recommended-production-settings.html), including supported hardware and software, and the recommended [Topology Patterns](topology-patterns.html). - -{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %} \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v21.2/prod-deployment/secure-scale-cluster.md deleted file mode 100644 index 55af10fc740..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-scale-cluster.md +++ /dev/null @@ -1,124 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
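To confirm that a newly added node has joined and is healthy, you can check node status from any machine that has the `root` client certificates. A minimal sketch, using a hypothetical node address (`10.0.1.1`); substitute one of your own node or load balancer addresses:

{% include_cached copy-clipboard.html %}
~~~ shell
# List cluster nodes; the new node should report is_available and is_live as true.
$ cockroach node status --certs-dir=certs --host=10.0.1.1
~~~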
- -
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-3. Copy the binary into the `PATH`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-    If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ mkdir /var/lib/cockroach
-    ~~~
-
-5. Create a Unix user named `cockroach`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ useradd cockroach
-    ~~~
-
-6. Move the `certs` directory to the `cockroach` directory:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ mv certs /var/lib/cockroach/
-    ~~~
-
-7. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ chown -R cockroach /var/lib/cockroach
-    ~~~
-
-8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save it in the `/etc/systemd/system/` directory:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ wget -q -O /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
-    ~~~
-
-    Alternatively, you can create the file yourself in that directory and copy the script into it:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
-    ~~~
-
-9. Customize the sample configuration template for your deployment:
-
-    Specify values for the following flags in the sample configuration template:
-
-    {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
-10. Repeat these steps for each additional node that you want in your cluster.
-
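After customizing the template on a new node, you still need to load and launch the service; the steps above stop at editing the unit file. A minimal sketch, assuming the unit file was saved as `/etc/systemd/system/securecockroachdb.service` as described above:

{% include_cached copy-clipboard.html %}
~~~ shell
# Make systemd pick up the new unit file, then start the node.
$ systemctl daemon-reload
$ systemctl start securecockroachdb
~~~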
diff --git a/src/current/_includes/v21.2/prod-deployment/secure-start-nodes.md b/src/current/_includes/v21.2/prod-deployment/secure-start-nodes.md deleted file mode 100644 index abe72cdbc39..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-start-nodes.md +++ /dev/null @@ -1,195 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.

This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`
`--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -6. Repeat these steps for each additional node that you want in your cluster. - -
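For example, a full invocation for the first node of a three-node, single-datacenter cluster might look like the following sketch. The addresses and the `--locality` value are hypothetical; substitute the values for your deployment:

{% include_cached copy-clipboard.html %}
~~~ shell
# Hypothetical addresses and locality; adjust for your environment.
$ cockroach start \
--certs-dir=certs \
--advertise-addr=10.0.1.1 \
--locality=region=us-east,zone=us-east-1a \
--join=10.0.1.1,10.0.1.2,10.0.1.3 \
--cache=.25 \
--max-sql-memory=.25 \
--background
~~~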
- -
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}
-After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-3. Copy the binary into the `PATH`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-    If you get a permissions error, prefix the command with `sudo`.
-
-4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ mkdir -p /usr/local/lib/cockroach
-    ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
-    ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
-    ~~~
-
-    If you get a permissions error, prefix the command with `sudo`.
-
-5. Create the Cockroach directory:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ mkdir /var/lib/cockroach
-    ~~~
-
-6. Create a Unix user named `cockroach`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ useradd cockroach
-    ~~~
-
-7. Move the `certs` directory to the `cockroach` directory:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ mv certs /var/lib/cockroach/
-    ~~~
-
-8. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ chown -R cockroach /var/lib/cockroach
-    ~~~
-
-9. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ wget -q -O /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
-    ~~~
-
-    Alternatively, you can create the file yourself in that directory and copy the script into it:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
-    ~~~
-
-10. In the sample configuration template, specify values for the following flags:
-
-    {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
-    When deploying across multiple datacenters, or when there is otherwise high latency between nodes, we recommend setting `--locality` as well. `--locality` is also required by certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality).
-
-    For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
-
-11. Start the CockroachDB cluster:
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ systemctl start securecockroachdb
-    ~~~
-
-12. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb`.
-{{site.data.alerts.end}}
-
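To inspect a node that `systemd` is managing, you can check the service state and its recent logs. A minimal sketch, assuming the `securecockroachdb` unit name used above:

{% include_cached copy-clipboard.html %}
~~~ shell
# Show the service state, then tail the last 50 lines of its log output.
$ systemctl status securecockroachdb
$ journalctl -u securecockroachdb -n 50
~~~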
diff --git a/src/current/_includes/v21.2/prod-deployment/secure-test-cluster.md b/src/current/_includes/v21.2/prod-deployment/secure-test-cluster.md deleted file mode 100644 index cbd81488b0d..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](cockroach-sql.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
- ~~~ - -2. Create a `securenodetest` database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE securenodetest; - ~~~ - -3. View the cluster's databases, which will include `securenodetest`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | securenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -4. Use `\q` to exit the SQL shell. \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v21.2/prod-deployment/secure-test-load-balancing.md deleted file mode 100644 index 2fb26c9e276..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/secure-test-load-balancing.md +++ /dev/null @@ -1,79 +0,0 @@ -CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_info}} -Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -For comprehensive guidance on benchmarking CockroachDB with TPC-C, see [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html). -{{site.data.alerts.end}} - -1. SSH to the machine where you want to run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files. - -1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -1. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key' - ~~~ - -1. 
Use the `cockroach workload` command to run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - - {{site.data.alerts.callout_success}} - For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`. - {{site.data.alerts.end}} - -1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. diff --git a/src/current/_includes/v21.2/prod-deployment/securecockroachdb.service b/src/current/_includes/v21.2/prod-deployment/securecockroachdb.service deleted file mode 100644 index 39054cf2e1d..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/securecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v21.2/prod-deployment/synchronize-clocks.md b/src/current/_includes/v21.2/prod-deployment/synchronize-clocks.md deleted file mode 100644 index ecd82f67d17..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/synchronize-clocks.md +++ /dev/null @@ -1,179 +0,0 @@ -CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. 
This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node. - -{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well. - -1. SSH to the first machine. - -2. Disable `timesyncd`, which tends to be active by default on some Linux distributions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo timedatectl set-ntp no - ~~~ - - Verify that `timesyncd` is off: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ timedatectl - ~~~ - - Look for `Network time on: no` or `NTP enabled: no` in the output. - -3. Install the `ntp` package: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -4. Stop the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -5. Sync the machine's clock with Google's NTP service: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include_cached copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}} - We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - {{site.data.alerts.end}} - -6. Verify that the machine is using a Google NTP server: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -7. Repeat these steps for each machine where a CockroachDB node will run. - -{% elsif page.title contains "Google" %} - -Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - -- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - -{% elsif page.title contains "AWS" %} - -Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. 
The service also smears the leap second. - -- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - - Per the above instructions, ensure that `etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. - - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - -{% elsif page.title contains "Azure" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems. - -1. SSH to the first machine. - -2. Find the ID of the Hyper-V Time Synchronization device: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3 - ~~~ - - ~~~ - VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization] - Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee} - Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee - Rel_ID=12, target_cpu=0 - ~~~ - -3. Unbind the device, using the `Device_ID` from the previous command's output: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ echo | sudo tee /sys/bus/vmbus/drivers/hv_utils/unbind - ~~~ - -4. Install the `ntp` package: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -5. Stop the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -6. Sync the machine's clock with Google's NTP service: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include_cached copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}} - We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - {{site.data.alerts.end}} - -7. 
Verify that the machine is using a Google NTP server: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -8. Repeat these steps for each machine where a CockroachDB node will run. - -{% endif %} diff --git a/src/current/_includes/v21.2/prod-deployment/terminology-vcpu.md b/src/current/_includes/v21.2/prod-deployment/terminology-vcpu.md deleted file mode 100644 index 790ce37a2b9..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/terminology-vcpu.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -In our sizing and production guidance, 1 vCPU is considered equivalent to 1 core in the underlying hardware platform. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.2/prod-deployment/topology-recommendations.md b/src/current/_includes/v21.2/prod-deployment/topology-recommendations.md deleted file mode 100644 index 31384079cec..00000000000 --- a/src/current/_includes/v21.2/prod-deployment/topology-recommendations.md +++ /dev/null @@ -1,19 +0,0 @@ -- Run each node on a separate machine. Since CockroachDB replicates across nodes, running more than one node per machine increases the risk of data loss if a machine fails. Likewise, if a machine has multiple disks or SSDs, run one node with multiple `--store` flags and not one node per disk. For more details about stores, see [Start a Node](cockroach-start.html#store). - -- When starting each node, use the [`--locality`](cockroach-start.html#locality) flag to describe the node's location, for example, `--locality=region=west,zone=us-west-1`. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes. - -- When deploying in a single availability zone: - - - To be able to tolerate the failure of any 1 node, use at least 3 nodes with the [`default` 3-way replication factor](configure-replication-zones.html#view-the-default-replication-zone). In this case, if 1 node fails, each range retains 2 of its 3 replicas, a majority. - - - To be able to tolerate 2 simultaneous node failures, use at least 5 nodes and [increase the `default` replication factor for user data](configure-replication-zones.html#edit-the-default-replication-zone) to 5. The replication factor for [important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) is 5 by default, so no adjustments are needed for internal data. In this case, if 2 nodes fail at the same time, each range retains 3 of its 5 replicas, a majority. - -- When deploying across multiple availability zones: - - - To be able to tolerate the failure of 1 entire AZ in a region, use at least 3 AZs per region and set `--locality` on each node to spread data evenly across regions and AZs. In this case, if 1 AZ goes offline, the 2 remaining AZs retain a majority of replicas. - - - To ensure that ranges are split evenly across nodes, use the same number of nodes in each AZ. This is to avoid overloading any nodes with excessive resource consumption. - -- When deploying across multiple regions: - - - To be able to tolerate the failure of 1 entire region, use at least 3 regions. 
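For example, `--locality` flags that spread three nodes across three AZs of one region might look like this (hypothetical region and zone names; use the names for your environment):

~~~
--locality=region=us-east,zone=us-east-1a
--locality=region=us-east,zone=us-east-1b
--locality=region=us-east,zone=us-east-1c
~~~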
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/prod-deployment/use-cluster.md b/src/current/_includes/v21.2/prod-deployment/use-cluster.md
deleted file mode 100644
index 0e65c9fb94c..00000000000
--- a/src/current/_includes/v21.2/prod-deployment/use-cluster.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Now that your deployment is working, you can:
-
-1. [Implement your data model](sql-statements.html).
-1. [Create users](create-user.html) and [grant them privileges](grant.html).
-1. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node.
-1. [Take backups](take-full-and-incremental-backups.html) of your data.
-
-You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see [Configure Replication Zones](configure-replication-zones.html).
-
-{{site.data.alerts.callout_danger}}
-When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v21.2/setup/create-a-free-cluster.md b/src/current/_includes/v21.2/setup/create-a-free-cluster.md
deleted file mode 100644
index 101a57da5e0..00000000000
--- a/src/current/_includes/v21.2/setup/create-a-free-cluster.md
+++ /dev/null
@@ -1,7 +0,0 @@
-1. If you haven't already, sign up for a CockroachDB {{ site.data.products.cloud }} account.
-1. [Log in](https://cockroachlabs.cloud/) to your CockroachDB {{ site.data.products.cloud }} account.
-1. On the **Clusters** page, click **Create Cluster**.
-1. On the **Create your cluster** page, select **Serverless**.
-1. Click **Create cluster**.
-
-    Your cluster will be created in a few seconds and the **Create SQL user** dialog will display.
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/setup/create-first-sql-user.md b/src/current/_includes/v21.2/setup/create-first-sql-user.md
deleted file mode 100644
index a1e46b5694b..00000000000
--- a/src/current/_includes/v21.2/setup/create-first-sql-user.md
+++ /dev/null
@@ -1,8 +0,0 @@
-The **Create SQL user** dialog allows you to create a new SQL user and password.
-
-1. Enter a username in the **SQL user** field or use the one provided by default.
-1. Click **Generate & save password**.
-1. Copy the generated password and save it in a secure location.
-1. Click **Next**.
-
-    Currently, all new users are created with full privileges. For more information and to change the default settings, see [Manage SQL users on a cluster](../cockroachcloud/managing-access.html#manage-sql-users-on-a-cluster).
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/setup/init-bank-sample.md b/src/current/_includes/v21.2/setup/init-bank-sample.md
deleted file mode 100644
index 86d09a9068f..00000000000
--- a/src/current/_includes/v21.2/setup/init-bank-sample.md
+++ /dev/null
@@ -1,38 +0,0 @@
-1. Set the `DATABASE_URL` environment variable to the connection string for your cluster:
-
- - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="postgresql://root@localhost:26257?sslmode=disable" - ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - export DATABASE_URL="{connection-string}" - ~~~ - - Where `{connection-string}` is the connection string you obtained from the CockroachDB {{ site.data.products.cloud }} Console. - -
- - -1. To initialize the example database, use the [`cockroach sql`](cockroach-sql.html) command to execute the SQL statements in the `dbinit.sql` file: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cat dbinit.sql | cockroach sql --url $DATABASE_URL - ~~~ - - The SQL statement in the initialization file should execute: - - ~~~ - CREATE TABLE - - - Time: 102ms - ~~~ diff --git a/src/current/_includes/v21.2/setup/sample-setup-certs.md b/src/current/_includes/v21.2/setup/sample-setup-certs.md deleted file mode 100644 index 7bd64f561a1..00000000000 --- a/src/current/_includes/v21.2/setup/sample-setup-certs.md +++ /dev/null @@ -1,38 +0,0 @@ - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the root certificate - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **General connection string** from the **Select option** dropdown. -1. Open a new terminal on your local machine, and run the **CA Cert download command** provided in the **Download CA Cert** section. The client driver used in this tutorial requires this certificate to connect to CockroachDB {{ site.data.products.cloud }}. - -### Get the connection string - -Open the **General connection string** section, then copy the connection string provided and save it in a secure location. - -{{site.data.alerts.callout_info}} -The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. -{{site.data.alerts.end}} - -
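As a rough illustration only (the string shown in the dialog is authoritative), a CockroachDB {{ site.data.products.cloud }} connection string generally has this shape, with placeholders for the values specific to your cluster:

~~~
postgresql://<username>:<password>@<cluster-host>:26257/defaultdb?sslmode=verify-full&sslrootcert=<path-to-ca.crt>
~~~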
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
diff --git a/src/current/_includes/v21.2/setup/sample-setup-parameters-certs.md b/src/current/_includes/v21.2/setup/sample-setup-parameters-certs.md deleted file mode 100644 index 559838e988f..00000000000 --- a/src/current/_includes/v21.2/setup/sample-setup-parameters-certs.md +++ /dev/null @@ -1,35 +0,0 @@ - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the root certificate - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **General connection string** from the **Select option** dropdown. -1. Open a new terminal on your local machine, and run the **CA Cert download command** provided in the **Download CA Cert** section. The client driver used in this tutorial requires this certificate to connect to CockroachDB {{ site.data.products.cloud }}. - -### Get the connection information - -1. Select **Parameters only** from the **Select option** dropdown. -1. Copy the connection information for each parameter displayed and save it in a secure location. - -
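As a rough illustration only (the dialog shows the authoritative values), the parameters map onto the parts of a standard PostgreSQL connection, for example:

~~~
User:     <username>
Password: <password>
Host:     <cluster-host>
Port:     26257
Database: defaultdb
~~~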
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
diff --git a/src/current/_includes/v21.2/setup/sample-setup-parameters.md b/src/current/_includes/v21.2/setup/sample-setup-parameters.md deleted file mode 100644 index d11f7fffad7..00000000000 --- a/src/current/_includes/v21.2/setup/sample-setup-parameters.md +++ /dev/null @@ -1,30 +0,0 @@ - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the connection information - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **Parameters only** from the **Select option** dropdown. -1. Copy the connection information for each parameter displayed and save it in a secure location. - -
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
diff --git a/src/current/_includes/v21.2/setup/sample-setup.md b/src/current/_includes/v21.2/setup/sample-setup.md deleted file mode 100644 index 60b95082f2b..00000000000 --- a/src/current/_includes/v21.2/setup/sample-setup.md +++ /dev/null @@ -1,36 +0,0 @@ - -
- - -
- -
- -### Create a free cluster - -{% include {{ page.version.version }}/setup/create-a-free-cluster.md %} - -### Create a SQL user - -{% include {{ page.version.version }}/setup/create-first-sql-user.md %} - -### Get the connection string - -The **Connect to cluster** dialog shows information about how to connect to your cluster. - -1. Select **General connection string** from the **Select option** dropdown. -1. Open the **General connection string** section, then copy the connection string provided and save it in a secure location. - - The sample application used in this tutorial uses system CA certificates for server certificate verification, so you can skip the **Download CA Cert** instructions. - - {{site.data.alerts.callout_info}} - The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. - {{site.data.alerts.end}} - -
- -
- -{% include {{ page.version.version }}/setup/start-single-node-insecure.md %} - -
\ No newline at end of file diff --git a/src/current/_includes/v21.2/setup/start-single-node-insecure.md b/src/current/_includes/v21.2/setup/start-single-node-insecure.md deleted file mode 100644 index 3807ba7208d..00000000000 --- a/src/current/_includes/v21.2/setup/start-single-node-insecure.md +++ /dev/null @@ -1,22 +0,0 @@ -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach start-single-node`](cockroach-start-single-node.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --advertise-addr 'localhost' --insecure - ~~~ - - This starts an insecure, single-node cluster. -1. Take note of the following connection information in the SQL shell welcome text: - - ~~~ - CockroachDB node starting at 2021-08-30 17:25:30.06524 +0000 UTC (took 4.3s) - build: CCL v21.1.6 @ 2021/07/20 15:33:43 (go1.15.11) - webui: http://localhost:8080 - sql: postgresql://root@localhost:26257?sslmode=disable - ~~~ - - You'll use the `sql` connection string to connect to the cluster later in this tutorial. - - -{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %} \ No newline at end of file diff --git a/src/current/_includes/v21.2/sidebar-data/deploy.json b/src/current/_includes/v21.2/sidebar-data/deploy.json deleted file mode 100644 index 13135035e2c..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/deploy.json +++ /dev/null @@ -1,342 +0,0 @@ -{ - "title": "Deploy", - "is_top_level": true, - "items": [ - { - "title": "Deployment Options", - "items": [ - { - "title": "CockroachDB Cloud", - "items": [ - { - "title": "Create an Account", - "urls": [ - "/cockroachcloud/create-an-account.html" - ] - }, - { - "title": "CockroachDB Serverless", - "items": [ - { - "title": "Create a CockroachDB Serverless (Basic) Cluster", - "urls": [ - "/cockroachcloud/create-a-basic-cluster.html" - ] - }, - { - "title": "Connect to Your Cluster", - "urls": [ - "/cockroachcloud/connect-to-a-basic-cluster.html" - ] - } - ] - }, - { - "title": "CockroachDB Dedicated", - "items": [ - { - "title": "Quickstart with CockroachDB Dedicated", - "urls": [ - "/cockroachcloud/quickstart-trial-cluster.html" - ] - }, - { - "title": "Create a CockroachDB Dedicated Cluster", - "urls": [ - "/cockroachcloud/create-your-cluster.html" - ] - }, - { - "title": "Connect to Your Cluster", - "urls": [ - "/cockroachcloud/connect-to-your-cluster.html" - ] - }, - { - "title": "Move into Production", - "urls": [ - "/cockroachcloud/production-checklist.html" - ] - } - ] - } - ] - }, - { - "title": "CockroachDB Self-Hosted", - "items": [ - { - "title": "Get Started", - "items": [ - { - "title": "Install CockroachDB", - "urls": [ - "/${VERSION}/install-cockroachdb.html", - "/${VERSION}/install-cockroachdb-mac.html", - "/${VERSION}/install-cockroachdb-linux.html", - "/${VERSION}/install-cockroachdb-windows.html" - ] - }, - { - "title": "Start a Local Cluster", - "items": [ - { - "title": "Start From Binary", - "urls": [ - "/${VERSION}/secure-a-cluster.html", - "/${VERSION}/start-a-local-cluster.html" - ] - }, - { - "title": "Start In Kubernetes", - "urls": [ - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes.html", - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes-insecure.html" - ] - }, - { - "title": "Start In Docker", - "urls": [ - "/${VERSION}/start-a-local-cluster-in-docker-mac.html", - "/${VERSION}/start-a-local-cluster-in-docker-linux.html", - "/${VERSION}/start-a-local-cluster-in-docker-windows.html" - ] - }, 
- { - "title": "Simulate a Multi-Region Cluster on localhost", - "urls": [ - "/${VERSION}/simulate-a-multi-region-cluster-on-localhost.html" - ] - } - ] - } - ] - }, - { - "title": "Production Checklist", - "urls": [ - "/${VERSION}/recommended-production-settings.html" - ] - }, - { - "title": "Kubernetes Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/kubernetes-overview.html" - ] - }, - { - "title": "Single-Cluster Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-with-kubernetes.html", - "/${VERSION}/deploy-cockroachdb-with-kubernetes-insecure.html" - ] - }, - { - "title": "OpenShift Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-with-kubernetes-openshift.html" - ] - }, - { - "title": "Multi-Cluster Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html" - ] - } - ] - }, - { - "title": "Manual Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/manual-deployment.html" - ] - }, - { - "title": "On-Premises Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-premises.html", - "/${VERSION}/deploy-cockroachdb-on-premises-insecure.html" - ] - }, - { - "title": "Deploy on AWS", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-aws.html", - "/${VERSION}/deploy-cockroachdb-on-aws-insecure.html" - ] - }, - { - "title": "Deploy on Azure", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure.html", - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure-insecure.html" - ] - }, - { - "title": "Deploy on Digital Ocean", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-digital-ocean.html", - "/${VERSION}/deploy-cockroachdb-on-digital-ocean-insecure.html" - ] - }, - { - "title": "Deploy on Google Cloud Platform GCE", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform.html", - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform-insecure.html" - ] - } - ] - } - ] - } - ] - }, - { - "title": "Multi-Region Capabilities", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/multiregion-overview.html" - ] - }, - { - "title": "How to Choose a Multi-Region Configuration", - "urls": [ - "/${VERSION}/choosing-a-multi-region-configuration.html" - ] - }, - { - "title": "When to Use ZONE vs. REGION Survival Goals", - "urls": [ - "/${VERSION}/when-to-use-zone-vs-region-survival-goals.html" - ] - }, - { - "title": "When to Use REGIONAL vs. 
GLOBAL Tables", - "urls": [ - "/${VERSION}/when-to-use-regional-vs-global-tables.html" - ] - }, - { - "title": "Data Domiciling with CockroachDB", - "urls": [ - "/${VERSION}/data-domiciling.html" - ] - }, - { - "title": "Migrate to Multi-Region SQL", - "urls": [ - "/${VERSION}/migrate-to-multiregion-sql.html" - ] - }, - { - "title": "Table Partitioning", - "urls": [ - "/${VERSION}/partitioning.html" - ] - }, - { - "title": "Topology Patterns", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/topology-patterns.html" - ] - }, - { - "title": "Development", - "urls": [ - "/${VERSION}/topology-development.html" - ] - }, - { - "title": "Basic Production", - "urls": [ - "/${VERSION}/topology-basic-production.html" - ] - }, - { - "title": "Regional Tables", - "urls": [ - "/${VERSION}/regional-tables.html" - ] - }, - { - "title": "Global Tables", - "urls": [ - "/${VERSION}/global-tables.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/topology-follower-reads.html" - ] - }, - { - "title": "Follow-the-Workload", - "urls": [ - "/${VERSION}/topology-follow-the-workload.html" - ] - } - ] - } - ] - }, - { - "title": "Explore CockroachDB Features", - "items": [{ - "title": "Replication & Rebalancing", - "urls": [ - "/${VERSION}/demo-replication-and-rebalancing.html" - ] - }, - { - "title": "Fault Tolerance & Recovery", - "urls": [ - "/${VERSION}/demo-fault-tolerance-and-recovery.html" - ] - }, - { - "title": "Multi-Region Performance", - "urls": [ - "/${VERSION}/demo-low-latency-multi-region-deployment.html" - ] - }, - { - "title": "Serializable Transactions", - "urls": [ - "/${VERSION}/demo-serializable.html" - ] - }, - { - "title": "Spatial Data", - "urls": [ - "/${VERSION}/spatial-tutorial.html" - ] - }, - { - "title": "Cross-Cloud Migration", - "urls": [ - "/${VERSION}/demo-automatic-cloud-migration.html" - ] - }, - { - "title": "JSON Support", - "urls": [ - "/${VERSION}/demo-json-support.html" - ] - } - ] - } - ] -} diff --git a/src/current/_includes/v21.2/sidebar-data/develop.json b/src/current/_includes/v21.2/sidebar-data/develop.json deleted file mode 100644 index f58f3e8c59b..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/develop.json +++ /dev/null @@ -1,470 +0,0 @@ -{ - "title": "Develop", - "is_top_level": true, - "items": [ - { - "title": "Developer Guide Overview", - "urls": [ - "/${VERSION}/developer-guide-overview.html" - ] - }, - { - "title": "Connect to CockroachDB", - "items": [ - { - "title": "Install a driver or ORM", - "urls": [ - "/${VERSION}/install-client-drivers.html" - ] - }, - { - "title": "Connect to a Cluster", - "urls": [ - "/${VERSION}/connect-to-the-database.html" - ] - }, - { - "title": "Use Connection Pools", - "urls": [ - "/${VERSION}/connection-pooling.html" - ] - } - ] - }, - { - "title": "Design a Database Schema", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/schema-design-overview.html" - ] - }, - { - "title": "Create a Database", - "urls": [ - "/${VERSION}/schema-design-database.html" - ] - }, - { - "title": "Create a User-defined Schema", - "urls": [ - "/${VERSION}/schema-design-schema.html" - ] - }, - { - "title": "Create a Table", - "urls": [ - "/${VERSION}/schema-design-table.html" - ] - }, - { - "title": "Add Secondary Indexes", - "urls": [ - "/${VERSION}/schema-design-indexes.html" - ] - }, - { - "title": "Update a Database Schema", - "items": [ - { - "title": "Change and Remove Objects", - "urls": [ - "/${VERSION}/schema-design-update.html" - ] - }, - { - "title": "Online Schema 
Changes", - "urls": [ - "/${VERSION}/online-schema-changes.html" - ] - } - ] - }, - { - "title": "Advanced Schema Design", - "items": [ - { - "title": "Use Computed Columns", - "urls": [ - "/${VERSION}/computed-columns.html" - ] - }, - { - "title": "Group Columns into Families", - "urls": [ - "/${VERSION}/column-families.html" - ] - }, - { - "title": "Index a Subset of Rows", - "urls": [ - "/${VERSION}/partial-indexes.html" - ] - }, - { - "title": "Index Sequential Keys", - "urls": [ - "/${VERSION}/hash-sharded-indexes.html" - ] - }, - { - "title": "Index JSON and Array Data", - "urls": [ - "/${VERSION}/inverted-indexes.html" - ] - }, - { - "title": "Index Expressions", - "urls": [ - "/${VERSION}/expression-indexes.html" - ] - }, - { - "title": "Index Spatial Data", - "urls": [ - "/${VERSION}/spatial-indexes.html" - ] - }, - { - "title": "Scale to Multiple Regions", - "urls": [ - "/${VERSION}/multiregion-scale-application.html" - ] - } - ] - } - ] - }, - { - "title": "Write Data", - "items": [ - { - "title": "Insert Data", - "urls": [ - "/${VERSION}/insert-data.html" - ] - }, - { - "title": "Update Data", - "urls": [ - "/${VERSION}/update-data.html" - ] - }, - { - "title": "Bulk-update Data", - "urls": [ - "/${VERSION}/bulk-update-data.html" - ] - }, - { - "title": "Delete Data", - "urls": [ - "/${VERSION}/delete-data.html" - ] - }, - { - "title": "Bulk-delete Data", - "urls": [ - "/${VERSION}/bulk-delete-data.html" - ] - } - ] - }, - { - "title": "Read Data", - "items": [ - { - "title": "Select Rows of Data", - "urls": [ - "/${VERSION}/query-data.html" - ] - }, - { - "title": "Reuse Query Results", - "items": [ - { - "title": "Reusable Views", - "urls": [ - "/${VERSION}/views.html" - ] - }, - { - "title": "Subqueries", - "urls": [ - "/${VERSION}/subqueries.html" - ] - } - ] - }, - { - "title": "Create Temporary Tables", - "urls": [ - "/${VERSION}/temporary-tables.html" - ] - }, - { - "title": "Paginate Results", - "urls": [ - "/${VERSION}/pagination.html" - ] - }, - { - "title": "Read Historical Data", - "items": [ - { - "title": "AS OF SYSTEM TIME", - "urls": [ - "/${VERSION}/as-of-system-time.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/follower-reads.html" - ] - } - ] - }, - { - "title": "Query Spatial Data", - "urls": [ - "/${VERSION}/query-spatial-data.html" - ] - } - ] - }, - { - "title": "Transactions", - "items": [ - { - "title": "Transactions Overview", - "urls": [ - "/${VERSION}/transactions.html" - ] - }, - { - "title": "Advanced Client-side Transaction Retries", - "urls": [ - "/${VERSION}/advanced-client-side-transaction-retries.html" - ] - } - ] - }, - { - "title": "Test Your Application Locally", - "urls": [ - "/${VERSION}/local-testing.html" - ] - }, - { - "title": "Troubleshoot Common Problems", - "urls": [ - "/${VERSION}/error-handling-and-troubleshooting.html" - ] - }, - { - "title": "Optimize Statement Performance", - "items": - [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/make-queries-fast.html" - ] - }, - { - "title": "Statement Tuning with EXPLAIN", - "urls": [ - "/${VERSION}/sql-tuning-with-explain.html" - ] - }, - { - "title": "Apply SQL Statement Performance Rules", - "urls": [ - "/${VERSION}/apply-statement-performance-rules.html" - ] - }, - { - "title": "SQL Performance Best Practices", - "urls": [ - "/${VERSION}/performance-best-practices-overview.html" - ] - }, - { - "title": "Performance Tuning Recipes", - "urls": [ - "/${VERSION}/performance-recipes.html" - ] - }, - { - "title": "Performance Features", - "items": - [ 
- { - "title": "Overview", - "urls": [ - "/${VERSION}/performance-features-overview.html" - ] - }, - { - "title": "Indexes", - "urls": [ - "/${VERSION}/indexes.html" - ] - }, - { - "title": "Cost-Based Optimizer", - "urls": [ - "/${VERSION}/cost-based-optimizer.html" - ] - }, - { - "title": "Vectorized Execution Engine", - "urls": [ - "/${VERSION}/vectorized-execution.html" - ] - }, - { - "title": "Load-Based Splitting", - "urls": [ - "/${VERSION}/load-based-splitting.html" - ] - } - ] - } - ] - }, - { - "title": "Example Applications", - "items": [ - { - "title": "Overview of Example Applications", - "urls": [ - "/${VERSION}/example-apps.html" - ] - }, - { - "title": "Build the Roach Data Application using Spring Boot", - "items": [ - { - "title": "Spring Boot with JDBC", - "urls": [ - "/${VERSION}/build-a-spring-app-with-cockroachdb-jdbc.html" - ] - }, - { - "title": "Spring Boot with JPA", - "urls": [ - "/${VERSION}/build-a-spring-app-with-cockroachdb-jpa.html" - ] - } - ] - }, - { - "title": "The MovR Example Application", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/movr.html" - ] - }, - { - "title": "Global Application", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/movr-flask-overview.html" - ] - }, - { - "title": "Global Application Use Case", - "urls": [ - "/${VERSION}/movr-flask-use-case.html" - ] - }, - { - "title": "Multi-region Database Schema", - "urls": [ - "/${VERSION}/movr-flask-database.html" - ] - }, - { - "title": "Set up a Development Environment", - "urls": [ - "/${VERSION}/movr-flask-setup.html" - ] - }, - { - "title": "Develop a Global Application", - "urls": [ - "/${VERSION}/movr-flask-application.html" - ] - }, - { - "title": "Deploy a Global Application", - "urls": [ - "/${VERSION}/movr-flask-deployment.html" - ] - } - ] - } - ] - }, - { - "title": "Deploy a Python To-Do App with Flask, Kubernetes, and CockroachDB Cloud", - "urls": [ - "/cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.html" - ] - } - ] - }, - { - "title": "Tutorials", - "items": [ - { - "title": "Schema Migration Tools", - "items": [ - { - "title": "Alembic", - "urls": [ - "/${VERSION}/alembic.html" - ] - }, - { - "title": "Flyway", - "urls": [ - "/${VERSION}/flyway.html" - ] - }, - { - "title": "Liquibase", - "urls": [ - "/${VERSION}/liquibase.html" - ] - } - ] - }, - { - "title": "GUIs & IDEs", - "items": [ - { - "title": "DBeaver GUI", - "urls": [ - "/${VERSION}/dbeaver.html" - ] - }, - { - "title": "IntelliJ IDEA", - "urls": [ - "/${VERSION}/intellij-idea.html" - ] - } - ] - } - ] - } - ] -} diff --git a/src/current/_includes/v21.2/sidebar-data/get-started.json b/src/current/_includes/v21.2/sidebar-data/get-started.json deleted file mode 100644 index 2d622e9e005..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/get-started.json +++ /dev/null @@ -1,167 +0,0 @@ -{ - "title": "Get Started", - "is_top_level": true, - "items": [{ - "title": "Quickstart", - "urls": [ - "/cockroachcloud/quickstart.html" - ] - }, - { - "title": "Learn CockroachDB SQL", - "urls": [ - "/cockroachcloud/learn-cockroachdb-sql.html", - "/${VERSION}/learn-cockroachdb-sql.html" - ] - }, - { - "title": "Build a Sample Application", - "items": [ - { - "title": "JavaScript/TypeScript", - "urls": [ - "/${VERSION}/build-a-nodejs-app-with-cockroachdb.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-sequelize.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-knexjs.html", - 
"/${VERSION}/build-a-nodejs-app-with-cockroachdb-prisma.html", - "/${VERSION}/build-a-typescript-app-with-cockroachdb.html" - ] - }, - { - "title": "Python", - "urls": [ - "/${VERSION}/build-a-python-app-with-cockroachdb-psycopg3.html", - "/${VERSION}/build-a-python-app-with-cockroachdb.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-sqlalchemy.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-django.html" - ] - }, - { - "title": "Go", - "urls": [ - "/${VERSION}/build-a-go-app-with-cockroachdb.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-gorm.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-pq.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-upperdb.html" - ] - }, - { - "title": "Java", - "urls": [ - "/${VERSION}/build-a-java-app-with-cockroachdb.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-hibernate.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-jooq.html", - "/${VERSION}/build-a-spring-app-with-cockroachdb-mybatis.html" - ] - }, - { - "title": "Ruby", - "urls": [ - "/${VERSION}/build-a-ruby-app-with-cockroachdb.html", - "/${VERSION}/build-a-ruby-app-with-cockroachdb-activerecord.html" - ] - }, - { - "title": "C# (.NET)", - "urls": [ - "/${VERSION}/build-a-csharp-app-with-cockroachdb.html" - ] - }, - { - "title": "Rust", - "urls": [ - "/${VERSION}/build-a-rust-app-with-cockroachdb.html" - ] - } - ] - }, - { - "title": "Build a Serverless Application", - "items": [ - { - "title": "AWS Lambda", - "urls": [ - "/${VERSION}/deploy-lambda-function.html" - ] - }, - { - "title": "Google Cloud Run", - "urls": [ - "/${VERSION}/deploy-app-gcr.html" - ] - }, - { - "title": "Netlify", - "urls": [ - "/${VERSION}/deploy-app-netlify.html" - ] - }, - { - "title": "Vercel", - "urls": [ - "/${VERSION}/deploy-app-vercel.html" - ] - }, - { - "title": "Serverless Function Best Practices", - "urls": [ - "/${VERSION}/serverless-function-best-practices.html" - ] - } - ] - }, - { - "title": "Glossary", - "urls": [ - "/${VERSION}/architecture/glossary.html" - ] - }, - { - "title": "FAQs", - "items": [ - { - "title": "CockroachDB FAQs", - "urls": [ - "/${VERSION}/frequently-asked-questions.html" - ] - }, - { - "title": "SQL FAQs", - "urls": [ - "/${VERSION}/sql-faqs.html" - ] - }, - { - "title": "Operational FAQs", - "urls": [ - "/${VERSION}/operational-faqs.html" - ] - }, - { - "title": "Availability FAQs", - "urls": [ - "/${VERSION}/multi-active-availability.html" - ] - }, - { - "title": "Licensing FAQs", - "urls": [ - "/${VERSION}/licensing-faqs.html" - ] - }, - { - "title": "Enterprise Features", - "urls": [ - "/${VERSION}/enterprise-licensing.html" - ] - }, - { - "title": "CockroachDB in Comparison", - "urls": [ - "/${VERSION}/cockroachdb-in-comparison.html" - ] - } - ] - } - ] -} diff --git a/src/current/_includes/v21.2/sidebar-data/manage.json b/src/current/_includes/v21.2/sidebar-data/manage.json deleted file mode 100644 index b2a79705747..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/manage.json +++ /dev/null @@ -1,532 +0,0 @@ -{ - "title": "Manage", - "is_top_level": true, - "items": [ - { - "title": "Manage CockroachDB Cloud Clusters", - "items": [ - { - "title": "Plan Your Cluster", - "urls": [ - "/cockroachcloud/plan-your-cluster.html" - ] - }, - { - "title": "Manage a CockroachDB Serverless (Basic) Cluster", - "urls": [ - "/cockroachcloud/basic-cluster-management.html" - ] - }, - { - "title": "Manage a CockroachDB Dedicated Cluster", - "urls": [ - "/cockroachcloud/cluster-management.html" - ] - }, - { - "title": "Manage Billing", - 
"urls": [ - "/cockroachcloud/billing-management.html" - ] - }, - { - "title": "Use the Cloud API", - "urls": [ - "/cockroachcloud/cloud-api.html" - ] - }, - { - "title": "Use the ccloud command", - "urls": [ - "/cockroachcloud/ccloud-get-started.html" - ] - } - ] - }, - { - "title": "Operate CockroachDB on Kubernetes", - "items": [ - { - "title": "Pod Scheduling", - "urls": [ - "/${VERSION}/schedule-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Resource Management", - "urls": [ - "/${VERSION}/configure-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Certificate Management", - "urls": [ - "/${VERSION}/secure-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Cluster Scaling", - "urls": [ - "/${VERSION}/scale-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Cluster Monitoring", - "urls": [ - "/${VERSION}/monitor-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Cluster Upgrades", - "urls": [ - "/${VERSION}/upgrade-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Optimizing Performance", - "urls": [ - "/${VERSION}/kubernetes-performance.html" - ] - } - ] - }, - { - "title": "Back Up and Restore Data", - "items": [ - { - "title": "Back Up and Restore CockroachDB Cloud Clusters", - "items": [ - { - "title": "Take and Restore Customer-Owned Backups", - "urls": [ - "/cockroachcloud/take-and-restore-self-managed-backups.html" - ] - }, - { - "title": "Use Managed-Service Backups", - "urls": [ - "/cockroachcloud/managed-backups.html" - ] - } - ] - }, - { - "title": "Back Up and Restore CockroachDB Self-Hosted Clusters", - "items": [ - { - "title": "Full and Incremental Backups", - "urls": [ - "/${VERSION}/take-full-and-incremental-backups.html" - ] - }, - { - "title": "Backups with Revision History and Point-in-time Restore", - "urls": [ - "/${VERSION}/take-backups-with-revision-history-and-restore-from-a-point-in-time.html" - ] - }, - { - "title": "Encrypted Backup and Restore", - "urls": [ - "/${VERSION}/take-and-restore-encrypted-backups.html" - ] - }, - { - "title": "Locality-aware Backup and Restore", - "urls": [ - "/${VERSION}/take-and-restore-locality-aware-backups.html" - ] - }, - { - "title": "Scheduled Backups", - "urls": [ - "/${VERSION}/manage-a-backup-schedule.html" - ] - } - ] - }, - { - "title": "Restoring Backups Across Versions", - "urls": [ - "/${VERSION}/restoring-backups-across-versions.html" - ] - } - ] - }, - { - "title": "File Storage for Bulk Operations", - "items": [ - { - "title": "Cloud Storage", - "urls": [ - "/${VERSION}/use-cloud-storage-for-bulk-operations.html" - ] - }, - { - "title": "Userfile Storage", - "urls": [ - "/${VERSION}/use-userfile-for-bulk-operations.html" - ] - }, - { - "title": "Local File Server", - "urls": [ - "/${VERSION}/use-a-local-file-server-for-bulk-operations.html" - ] - } - ] - }, - { - "title": "Security", - "items": [ - { - "title": "Secure CockroachDB Cloud Clusters", - "items": [ - { - "title": "Authentication", - "items": [ - { - "title": "Authentication Overview", - "urls": [ - "/cockroachcloud/authentication.html" - ] - }, - { - "title": "Single Sign-On (SSO)", - "urls": [ - "/cockroachcloud/cloud-org-sso.html" - ] - }, - { - "title": "Configure Cloud Organization SSO", - "urls": [ - "/cockroachcloud/configure-cloud-org-sso.html" - ] - } - ] - }, - { - "title": "Configure SQL Authentication for Hardened Serverless Cluster Security", - "urls": [ - "/${VERSION}/security-reference/config-secure-hba.html" - ] - }, - { - "title": "Network Authorization", - "urls": [ - 
"/cockroachcloud/network-authorization.html" - ] - }, - { - "title": "SQL Audit Logging", - "urls": [ - "/cockroachcloud/sql-audit-logging.html" - ] - }, - { - "title": "Managing Access in CockroachDB Cloud", - "urls": [ - "/cockroachcloud/managing-access.html" - ] - } - ] - }, - { - "title": "Secure CockroachDB Self-Hosted Clusters", - "items": [ - { - "title": "Authentication", - "urls": [ - "/${VERSION}/authentication.html" - ] - }, - { - "title": "Encryption", - "urls": [ - "/${VERSION}/encryption.html" - ] - }, - { - "title": "Authorization", - "urls": [ - "/${VERSION}/authorization.html" - ] - }, - { - "title": "SQL Audit Logging", - "urls": [ - "/${VERSION}/sql-audit-logging.html" - ] - }, - { - "title": "GSSAPI Authentication", - "urls": [ - "/${VERSION}/gssapi_authentication.html" - ] - }, - { - "title": "Single Sign-on", - "urls": [ - "/${VERSION}/sso.html" - ] - }, - { - "title": "Rotate Security Certificates", - "urls": [ - "/${VERSION}/rotate-certificates.html" - ] - } - ] - } - ] - }, - { - "title": "Monitoring and Alerting", - "items": [ - { - "title": "Monitor a CockroachDB Cloud Cluster", - "items": [ - { - "title": "Cluster Overview Page", - "urls": [ - "/cockroachcloud/cluster-overview-page.html" - ] - }, - { - "title": "Alerts Page", - "urls": [ - "/cockroachcloud/alerts-page.html" - ] - }, - { - "title": "Tools Page", - "urls": [ - "/cockroachcloud/tools-page.html" - ] - }, - { - "title": "Statements Page", - "urls": [ - "/cockroachcloud/statements-page.html" - ] - }, - { - "title": "Sessions Page", - "urls": [ - "/cockroachcloud/sessions-page.html" - ] - }, - { - "title": "Transactions Page", - "urls": [ - "/cockroachcloud/transactions-page.html" - ] - } - ] - }, - { - "title": "Monitor a CockroachDB Self-Hosted Cluster", - "items": [ - { - "title": "Monitoring Clusters Overview", - "urls": [ - "/${VERSION}/monitoring-and-alerting.html" - ] - }, - { - "title": "Common Issues to Monitor", - "urls": [ - "/${VERSION}/common-issues-to-monitor.html" - ] - }, - { - "title": "Enable the Node Map", - "urls": [ - "/${VERSION}/enable-node-map.html" - ] - }, - { - "title": "Use Prometheus and Alertmanager", - "urls": [ - "/${VERSION}/monitor-cockroachdb-with-prometheus.html" - ] - }, - { - "title": "Cluster API", - "urls": [ - "/${VERSION}/cluster-api.html" - ] - } - ] - }, - { - "title": "Third-Party Monitoring Integrations", - "items": [ - { - "title": "Third-Party Monitoring Integration Overview", - "urls": [ - "/${VERSION}/third-party-monitoring-tools.html" - ] - }, - { - "title": "Monitor CockroachDB {{ site.data.products.core }} with Datadog", - "urls": [ - "/${VERSION}/datadog.html" - ] - }, - { - "title": "Monitor with DBmarlin", - "urls": [ - "/${VERSION}/dbmarlin.html" - ] - }, - { - "title": "Monitor with Kibana", - "urls": [ - "/${VERSION}/kibana.html" - ] - } - ] - } - ] - }, - { - "title": "Logging", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/logging-overview.html" - ] - }, - { - "title": "Configure Logs", - "urls": [ - "/${VERSION}/configure-logs.html" - ] - }, - { - "title": "Logging Use Cases", - "urls": [ - "/${VERSION}/logging-use-cases.html" - ] - } - ] - }, - { - "title": "Cluster Maintenance", - "items": [ - { - "title": "Upgrade a Cluster", - "items": [ - { - "title": "Uprade a CockroachDB Cloud Cluster", - "items": [ - { - "title": "Upgrade Policy", - "urls": [ - "/cockroachcloud/upgrade-policy.html" - ] - }, - { - "title": "Upgrade a cluster", - "urls": [ - "/cockroachcloud/upgrade-cockroach-version.html" - ] - } - ] - }, - { - 
"title": "Upgrade a CockroachDB Self-Hosted Cluster", - "items": [ - { - "title": "Upgrade to CockroachDB v21.2", - "urls": [ - "/${VERSION}/upgrade-cockroach-version.html" - ] - } - ] - } - ] - }, - { - "title": "Manage Long-Running Queries", - "urls": [ - "/${VERSION}/manage-long-running-queries.html" - ] - }, - { - "title": "Node Shutdown", - "urls": [ - "/${VERSION}/node-shutdown.html" - ] - }, - { - "title": "Disaster Recovery", - "urls": [ - "/${VERSION}/disaster-recovery.html" - ] - } - ] - }, - { - "title": "Replication Controls", - "urls": [ - "/${VERSION}/configure-replication-zones.html" - ] - }, - { - "title": "Troubleshooting", - "items": [ - { - "title": "Troubleshooting Overview", - "urls": [ - "/${VERSION}/troubleshooting-overview.html" - ] - }, - { - "title": "Common Errors and Solutions", - "urls": [ - "/${VERSION}/common-errors.html" - ] - }, - { - "title": "Troubleshoot Cluster Setup", - "urls": [ - "/${VERSION}/cluster-setup-troubleshooting.html" - ] - }, - { - "title": "Troubleshoot Statement Behavior", - "urls": [ - "/${VERSION}/query-behavior-troubleshooting.html" - ] - }, - { - "title": "Troubleshoot CockroachDB Cloud", - "urls": [ - "/cockroachcloud/troubleshooting-page.html" - ] - }, - { - "title": "Replication Reports", - "urls": [ - "/${VERSION}/query-replication-reports.html" - ] - }, - { - "title": "Support Resources", - "urls": [ - "/${VERSION}/support-resources.html" - ] - }, - { - "title": "File an Issue", - "urls": [ - "/${VERSION}/file-an-issue.html" - ] - } - ] - } - ] -} diff --git a/src/current/_includes/v21.2/sidebar-data/migrate.json b/src/current/_includes/v21.2/sidebar-data/migrate.json deleted file mode 100644 index 7332c7ba312..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/migrate.json +++ /dev/null @@ -1,77 +0,0 @@ -{ - "title": "Migrate", - "is_top_level": true, - "items": [ - { - "title": "Migration Overview", - "urls": [ - "/${VERSION}/migration-overview.html" - ] - }, - { - "title": "Use the Schema Conversion Tool", - "urls": [ - "/cockroachcloud/migrations-page.html" - ] - }, - { - "title": "Migrate Data to CockroachDB", - "items": [ - { - "title": "Migrate data using AWS DMS", - "urls": [ - "/${VERSION}/aws-dms.html" - ] - }, - { - "title": "Migrate from CSV", - "urls": [ - "/${VERSION}/migrate-from-csv.html" - ] - }, - { - "title": "Migrate from Avro", - "urls": [ - "/${VERSION}/migrate-from-avro.html" - ] - }, - { - "title": "Migrate from Shapefiles", - "urls": [ - "/${VERSION}/migrate-from-shapefiles.html" - ] - }, - { - "title": "Migrate from OpenStreetMap", - "urls": [ - "/${VERSION}/migrate-from-openstreetmap.html" - ] - }, - { - "title": "Migrate from GeoJSON", - "urls": [ - "/${VERSION}/migrate-from-geojson.html" - ] - }, - { - "title": "Migrate from GeoPackage", - "urls": [ - "/${VERSION}/migrate-from-geopackage.html" - ] - }, - { - "title": "Import Performance Best Practices", - "urls": [ - "/${VERSION}/import-performance-best-practices.html" - ] - } - ] - }, - { - "title": "Export Spatial Data", - "urls": [ - "/${VERSION}/export-spatial-data.html" - ] - } - ] -} diff --git a/src/current/_includes/v21.2/sidebar-data/reference.json b/src/current/_includes/v21.2/sidebar-data/reference.json deleted file mode 100644 index 9de10775d12..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/reference.json +++ /dev/null @@ -1,1758 +0,0 @@ -{ - "title": "Reference", - "is_top_level": true, - "items": [ - { - "title": "Architecture", - "items": [ - { - "title": "Architecture Overview", - "urls": [ - 
"/${VERSION}/architecture/overview.html" - ] - }, - { - "title": "SQL Layer", - "urls": [ - "/${VERSION}/architecture/sql-layer.html" - ] - }, - { - "title": "Transaction Layer", - "urls": [ - "/${VERSION}/architecture/transaction-layer.html" - ] - }, - { - "title": "Distribution Layer", - "urls": [ - "/${VERSION}/architecture/distribution-layer.html" - ] - }, - { - "title": "Replication Layer", - "urls": [ - "/${VERSION}/architecture/replication-layer.html" - ] - }, - { - "title": "Storage Layer", - "urls": [ - "/${VERSION}/architecture/storage-layer.html" - ] - }, - { - "title": "Life of a Distributed Transaction", - "urls": [ - "/${VERSION}/architecture/life-of-a-distributed-transaction.html" - ] - }, - { - "title": "Reads and Writes Overview", - "urls": [ - "/${VERSION}/architecture/reads-and-writes-overview.html" - ] - }, - { - "title": "Admission Control", - "urls": [ - "/${VERSION}/architecture/admission-control.html" - ] - } - ] - }, - { - "title": "SQL", - "items": [ - { - "title": "SQL Overview", - "urls": [ - "/${VERSION}/sql-feature-support.html" - ] - }, - { - "title": "PostgreSQL Compatibility", - "urls": [ - "/${VERSION}/postgresql-compatibility.html" - ] - }, - { - "title": "SQL Syntax", - "items": [ - { - "title": "Full SQL Grammar", - "urls": [ - "/${VERSION}/sql-grammar.html" - ] - }, - { - "title": "Keywords & Identifiers", - "urls": [ - "/${VERSION}/keywords-and-identifiers.html" - ] - }, - { - "title": "Constants", - "urls": [ - "/${VERSION}/sql-constants.html" - ] - }, - { - "title": "Selection Queries", - "urls": [ - "/${VERSION}/selection-queries.html" - ] - }, - { - "title": "Table Expressions", - "urls": [ - "/${VERSION}/table-expressions.html" - ] - }, - { - "title": "Common Table Expressions", - "urls": [ - "/${VERSION}/common-table-expressions.html" - ] - }, - { - "title": "Scalar Expressions", - "urls": [ - "/${VERSION}/scalar-expressions.html" - ] - }, - { - "title": "NULL Handling", - "urls": [ - "/${VERSION}/null-handling.html" - ] - } - ] - }, - { - "title": "SQL Statements", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/sql-statements.html" - ] - }, - { - "title": "ADD COLUMN", - "urls": [ - "/${VERSION}/add-column.html" - ] - }, - { - "title": "ADD CONSTRAINT", - "urls": [ - "/${VERSION}/add-constraint.html" - ] - }, - { - "title": "ADD REGION (Enterprise)", - "urls": [ - "/${VERSION}/add-region.html" - ] - }, - { - "title": "ALTER COLUMN", - "urls": [ - "/${VERSION}/alter-column.html" - ] - }, - { - "title": "ALTER DATABASE", - "urls": [ - "/${VERSION}/alter-database.html" - ] - }, - { - "title": "ALTER DEFAULT PRIVILEGES", - "urls": [ - "/${VERSION}/alter-default-privileges.html" - ] - }, - { - "title": "ALTER INDEX", - "urls": [ - "/${VERSION}/alter-index.html" - ] - }, - { - "title": "ALTER PARTITION (Enterprise)", - "urls": [ - "/${VERSION}/alter-partition.html" - ] - }, - { - "title": "ALTER PRIMARY KEY", - "urls": [ - "/${VERSION}/alter-primary-key.html" - ] - }, - { - "title": "ALTER RANGE", - "urls": [ - "/${VERSION}/alter-range.html" - ] - }, - { - "title": "ALTER ROLE", - "urls": [ - "/${VERSION}/alter-role.html" - ] - }, - { - "title": "ALTER SCHEMA", - "urls": [ - "/${VERSION}/alter-schema.html" - ] - }, - { - "title": "ALTER SEQUENCE", - "urls": [ - "/${VERSION}/alter-sequence.html" - ] - }, - { - "title": "ALTER TABLE", - "urls": [ - "/${VERSION}/alter-table.html" - ] - }, - { - "title": "ALTER TYPE", - "urls": [ - "/${VERSION}/alter-type.html" - ] - }, - { - "title": "ALTER USER", - "urls": [ - 
"/${VERSION}/alter-user.html" - ] - }, - { - "title": "ALTER VIEW", - "urls": [ - "/${VERSION}/alter-view.html" - ] - }, - { - "title": "EXPERIMENTAL_AUDIT", - "urls": [ - "/${VERSION}/experimental-audit.html" - ] - }, - { - "title": "BACKUP", - "urls": [ - "/${VERSION}/backup.html" - ] - }, - { - "title": "BEGIN", - "urls": [ - "/${VERSION}/begin-transaction.html" - ] - }, - { - "title": "CANCEL JOB", - "urls": [ - "/${VERSION}/cancel-job.html" - ] - }, - { - "title": "CANCEL QUERY", - "urls": [ - "/${VERSION}/cancel-query.html" - ] - }, - { - "title": "CANCEL SESSION", - "urls": [ - "/${VERSION}/cancel-session.html" - ] - }, - { - "title": "COMMENT ON", - "urls": [ - "/${VERSION}/comment-on.html" - ] - }, - { - "title": "COMMIT", - "urls": [ - "/${VERSION}/commit-transaction.html" - ] - }, - { - "title": "CONFIGURE ZONE", - "urls": [ - "/${VERSION}/configure-zone.html" - ] - }, - { - "title": "CONVERT TO SCHEMA", - "urls": [ - "/${VERSION}/convert-to-schema.html" - ] - }, - { - "title": "COPY FROM", - "urls": [ - "/${VERSION}/copy-from.html" - ] - }, - { - "title": "CREATE CHANGEFEED (Enterprise)", - "urls": [ - "/${VERSION}/create-changefeed.html" - ] - }, - { - "title": "CREATE DATABASE", - "urls": [ - "/${VERSION}/create-database.html" - ] - }, - { - "title": "CREATE INDEX", - "urls": [ - "/${VERSION}/create-index.html" - ] - }, - { - "title": "CREATE ROLE", - "urls": [ - "/${VERSION}/create-role.html" - ] - }, - { - "title": "CREATE SCHEDULE FOR BACKUP", - "urls": [ - "/${VERSION}/create-schedule-for-backup.html" - ] - }, - { - "title": "CREATE SCHEMA", - "urls": [ - "/${VERSION}/create-schema.html" - ] - }, - { - "title": "CREATE SEQUENCE", - "urls": [ - "/${VERSION}/create-sequence.html" - ] - }, - { - "title": "CREATE STATISTICS", - "urls": [ - "/${VERSION}/create-statistics.html" - ] - }, - { - "title": "CREATE TABLE", - "urls": [ - "/${VERSION}/create-table.html" - ] - }, - { - "title": "CREATE TABLE AS", - "urls": [ - "/${VERSION}/create-table-as.html" - ] - }, - { - "title": "CREATE TYPE", - "urls": [ - "/${VERSION}/create-type.html" - ] - }, - { - "title": "CREATE USER", - "urls": [ - "/${VERSION}/create-user.html" - ] - }, - { - "title": "CREATE VIEW", - "urls": [ - "/${VERSION}/create-view.html" - ] - }, - { - "title": "DELETE", - "urls": [ - "/${VERSION}/delete.html" - ] - }, - { - "title": "DROP COLUMN", - "urls": [ - "/${VERSION}/drop-column.html" - ] - }, - { - "title": "DROP CONSTRAINT", - "urls": [ - "/${VERSION}/drop-constraint.html" - ] - }, - { - "title": "DROP DATABASE", - "urls": [ - "/${VERSION}/drop-database.html" - ] - }, - { - "title": "DROP REGION (Enterprise)", - "urls": [ - "/${VERSION}/drop-region.html" - ] - }, - { - "title": "DROP TYPE", - "urls": [ - "/${VERSION}/drop-type.html" - ] - }, - { - "title": "DROP INDEX", - "urls": [ - "/${VERSION}/drop-index.html" - ] - }, - { - "title": "DROP ROLE", - "urls": [ - "/${VERSION}/drop-role.html" - ] - }, - { - "title": "DROP SCHEDULES", - "urls": [ - "/${VERSION}/drop-schedules.html" - ] - }, - { - "title": "DROP SCHEMA", - "urls": [ - "/${VERSION}/drop-schema.html" - ] - }, - { - "title": "DROP SEQUENCE", - "urls": [ - "/${VERSION}/drop-sequence.html" - ] - }, - { - "title": "DROP TABLE", - "urls": [ - "/${VERSION}/drop-table.html" - ] - }, - { - "title": "DROP USER", - "urls": [ - "/${VERSION}/drop-user.html" - ] - }, - { - "title": "DROP VIEW", - "urls": [ - "/${VERSION}/drop-view.html" - ] - }, - { - "title": "EXPERIMENTAL CHANGEFEED FOR", - "urls": [ - "/${VERSION}/changefeed-for.html" - ] - }, - { - 
"title": "EXPLAIN", - "urls": [ - "/${VERSION}/explain.html" - ] - }, - { - "title": "EXPLAIN ANALYZE", - "urls": [ - "/${VERSION}/explain-analyze.html" - ] - }, - { - "title": "EXPORT", - "urls": [ - "/${VERSION}/export.html" - ] - }, - { - "title": "GRANT", - "urls": [ - "/${VERSION}/grant.html" - ] - }, - { - "title": "IMPORT", - "urls": [ - "/${VERSION}/import.html" - ] - }, - { - "title": "IMPORT INTO", - "urls": [ - "/${VERSION}/import-into.html" - ] - }, - { - "title": "INSERT", - "urls": [ - "/${VERSION}/insert.html" - ] - }, - { - "title": "JOIN", - "urls": [ - "/${VERSION}/joins.html" - ] - }, - { - "title": "LIMIT/OFFSET", - "urls": [ - "/${VERSION}/limit-offset.html" - ] - }, - { - "title": "ORDER BY", - "urls": [ - "/${VERSION}/order-by.html" - ] - }, - { - "title": "OWNER TO", - "urls": [ - "/${VERSION}/owner-to.html" - ] - }, - { - "title": "PARTITION BY (Enterprise)", - "urls": [ - "/${VERSION}/partition-by.html" - ] - }, - { - "title": "PAUSE JOB", - "urls": [ - "/${VERSION}/pause-job.html" - ] - }, - { - "title": "PAUSE SCHEDULES", - "urls": [ - "/${VERSION}/pause-schedules.html" - ] - }, - { - "title": "PLACEMENT (RESTRICTED | DEFAULT)", - "urls": [ - "/${VERSION}/placement-restricted.html" - ] - }, - { - "title": "REASSIGN OWNED", - "urls": [ - "/${VERSION}/reassign-owned.html" - ] - }, - { - "title": "REFRESH", - "urls": [ - "/${VERSION}/refresh.html" - ] - }, - { - "title": "RENAME COLUMN", - "urls": [ - "/${VERSION}/rename-column.html" - ] - }, - { - "title": "RENAME CONSTRAINT", - "urls": [ - "/${VERSION}/rename-constraint.html" - ] - }, - { - "title": "RENAME DATABASE", - "urls": [ - "/${VERSION}/rename-database.html" - ] - }, - { - "title": "RENAME INDEX", - "urls": [ - "/${VERSION}/rename-index.html" - ] - }, - { - "title": "RENAME TABLE", - "urls": [ - "/${VERSION}/rename-table.html" - ] - }, - { - "title": "RELEASE SAVEPOINT", - "urls": [ - "/${VERSION}/release-savepoint.html" - ] - }, - { - "title": "RESET {session variable}", - "urls": [ - "/${VERSION}/reset-vars.html" - ] - }, - { - "title": "RESET CLUSTER SETTING", - "urls": [ - "/${VERSION}/reset-cluster-setting.html" - ] - }, - { - "title": "RESTORE", - "urls": [ - "/${VERSION}/restore.html" - ] - }, - { - "title": "RESUME JOB", - "urls": [ - "/${VERSION}/resume-job.html" - ] - }, - { - "title": "RESUME SCHEDULES", - "urls": [ - "/${VERSION}/resume-schedules.html" - ] - }, - { - "title": "REVOKE", - "urls": [ - "/${VERSION}/revoke.html" - ] - }, - { - "title": "ROLLBACK", - "urls": [ - "/${VERSION}/rollback-transaction.html" - ] - }, - { - "title": "SAVEPOINT", - "urls": [ - "/${VERSION}/savepoint.html" - ] - }, - { - "title": "SELECT", - "urls": [ - "/${VERSION}/select-clause.html" - ] - }, - { - "title": "SELECT FOR UPDATE", - "urls": [ - "/${VERSION}/select-for-update.html" - ] - }, - { - "title": "SET {session variable}", - "urls": [ - "/${VERSION}/set-vars.html" - ] - }, - { - "title": "SET CLUSTER SETTING", - "urls": [ - "/${VERSION}/set-cluster-setting.html" - ] - }, - { - "title": "SET LOCALITY", - "urls": [ - "/${VERSION}/set-locality.html" - ] - }, - { - "title": "SET PRIMARY REGION (Enterprise)", - "urls": [ - "/${VERSION}/set-primary-region.html" - ] - }, - { - "title": "SET SCHEMA", - "urls": [ - "/${VERSION}/set-schema.html" - ] - }, - { - "title": "SET TRANSACTION", - "urls": [ - "/${VERSION}/set-transaction.html" - ] - }, - { - "title": "SHOW {session variable}", - "urls": [ - "/${VERSION}/show-vars.html" - ] - }, - { - "title": "SHOW BACKUP", - "urls": [ - "/${VERSION}/show-backup.html" - 
] - }, - { - "title": "SHOW CLUSTER SETTING", - "urls": [ - "/${VERSION}/show-cluster-setting.html" - ] - }, - { - "title": "SHOW COLUMNS", - "urls": [ - "/${VERSION}/show-columns.html" - ] - }, - { - "title": "SHOW CONSTRAINTS", - "urls": [ - "/${VERSION}/show-constraints.html" - ] - }, - { - "title": "SHOW CREATE", - "urls": [ - "/${VERSION}/show-create.html" - ] - }, - { - "title": "SHOW CREATE SCHEDULE", - "urls": [ - "/${VERSION}/show-create-schedule.html" - ] - }, - { - "title": "SHOW DATABASES", - "urls": [ - "/${VERSION}/show-databases.html" - ] - }, - { - "title": "SHOW DEFAULT PRIVILEGES", - "urls": [ - "/${VERSION}/show-default-privileges.html" - ] - }, - { - "title": "SHOW ENUMS", - "urls": [ - "/${VERSION}/show-enums.html" - ] - }, - { - "title": "SHOW FULL TABLE SCANS", - "urls": [ - "/${VERSION}/show-full-table-scans.html" - ] - }, - { - "title": "SHOW GRANTS", - "urls": [ - "/${VERSION}/show-grants.html" - ] - }, - { - "title": "SHOW INDEX", - "urls": [ - "/${VERSION}/show-index.html" - ] - }, - { - "title": "SHOW JOBS", - "urls": [ - "/${VERSION}/show-jobs.html" - ] - }, - { - "title": "SHOW LOCALITY", - "urls": [ - "/${VERSION}/show-locality.html" - ] - }, - { - "title": "SHOW PARTITIONS (Enterprise)", - "urls": [ - "/${VERSION}/show-partitions.html" - ] - }, - { - "title": "SHOW RANGES", - "urls": [ - "/${VERSION}/show-ranges.html" - ] - }, - { - "title": "SHOW RANGE FOR ROW", - "urls": [ - "/${VERSION}/show-range-for-row.html" - ] - }, - { - "title": "SHOW REGIONS", - "urls": [ - "/${VERSION}/show-regions.html" - ] - }, - { - "title": "SHOW ROLES", - "urls": [ - "/${VERSION}/show-roles.html" - ] - }, - { - "title": "SHOW SCHEDULES", - "urls": [ - "/${VERSION}/show-schedules.html" - ] - }, - { - "title": "SHOW SCHEMAS", - "urls": [ - "/${VERSION}/show-schemas.html" - ] - }, - { - "title": "SHOW SEQUENCES", - "urls": [ - "/${VERSION}/show-sequences.html" - ] - }, - { - "title": "SHOW SESSIONS", - "urls": [ - "/${VERSION}/show-sessions.html" - ] - }, - { - "title": "SHOW STATEMENTS", - "urls": [ - "/${VERSION}/show-statements.html" - ] - }, - { - "title": "SHOW STATISTICS", - "urls": [ - "/${VERSION}/show-statistics.html" - ] - }, - { - "title": "SHOW SAVEPOINT STATUS", - "urls": [ - "/${VERSION}/show-savepoint-status.html" - ] - }, - { - "title": "SHOW TABLES", - "urls": [ - "/${VERSION}/show-tables.html" - ] - }, - { - "title": "SHOW TRACE FOR SESSION", - "urls": [ - "/${VERSION}/show-trace.html" - ] - }, - { - "title": "SHOW TRANSACTIONS", - "urls": [ - "/${VERSION}/show-transactions.html" - ] - }, - { - "title": "SHOW TYPES", - "urls": [ - "/${VERSION}/show-types.html" - ] - }, - { - "title": "SHOW USERS", - "urls": [ - "/${VERSION}/show-users.html" - ] - }, - { - "title": "SHOW ZONE CONFIGURATIONS", - "urls": [ - "/${VERSION}/show-zone-configurations.html" - ] - }, - { - "title": "SPLIT AT", - "urls": [ - "/${VERSION}/split-at.html" - ] - }, - { - "title": "SURVIVE {ZONE,REGION} FAILURE", - "urls": [ - "/${VERSION}/survive-failure.html" - ] - }, - { - "title": "TRUNCATE", - "urls": [ - "/${VERSION}/truncate.html" - ] - }, - { - "title": "UNSPLIT AT", - "urls": [ - "/${VERSION}/unsplit-at.html" - ] - }, - { - "title": "UPDATE", - "urls": [ - "/${VERSION}/update.html" - ] - }, - { - "title": "UPSERT", - "urls": [ - "/${VERSION}/upsert.html" - ] - }, - { - "title": "VALIDATE CONSTRAINT", - "urls": [ - "/${VERSION}/validate-constraint.html" - ] - } - ] - }, - { - "title": "Data Types", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/data-types.html" - ] 
- }, - { - "title": "ARRAY", - "urls": [ - "/${VERSION}/array.html" - ] - }, - { - "title": "BIT", - "urls": [ - "/${VERSION}/bit.html" - ] - }, - { - "title": "BOOL", - "urls": [ - "/${VERSION}/bool.html" - ] - }, - { - "title": "BYTES", - "urls": [ - "/${VERSION}/bytes.html" - ] - }, - { - "title": "COLLATE", - "urls": [ - "/${VERSION}/collate.html" - ] - }, - { - "title": "DATE", - "urls": [ - "/${VERSION}/date.html" - ] - }, - { - "title": "DECIMAL", - "urls": [ - "/${VERSION}/decimal.html" - ] - }, - { - "title": "ENUM", - "urls": [ - "/${VERSION}/enum.html" - ] - }, - { - "title": "FLOAT", - "urls": [ - "/${VERSION}/float.html" - ] - }, - { - "title": "INET", - "urls": [ - "/${VERSION}/inet.html" - ] - }, - { - "title": "INT", - "urls": [ - "/${VERSION}/int.html" - ] - }, - { - "title": "INTERVAL", - "urls": [ - "/${VERSION}/interval.html" - ] - }, - { - "title": "JSONB", - "urls": [ - "/${VERSION}/jsonb.html" - ] - }, - { - "title": "SERIAL", - "urls": [ - "/${VERSION}/serial.html" - ] - }, - { - "title": "STRING", - "urls": [ - "/${VERSION}/string.html" - ] - }, - { - "title": "TIME", - "urls": [ - "/${VERSION}/time.html" - ] - }, - { - "title": "TIMESTAMP", - "urls": [ - "/${VERSION}/timestamp.html" - ] - }, - { - "title": "UUID", - "urls": [ - "/${VERSION}/uuid.html" - ] - } - ] - }, - { - "title": "Constraints", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/constraints.html" - ] - }, - { - "title": "Check", - "urls": [ - "/${VERSION}/check.html" - ] - }, - { - "title": "Default Value", - "urls": [ - "/${VERSION}/default-value.html" - ] - }, - { - "title": "Foreign Key", - "urls": [ - "/${VERSION}/foreign-key.html" - ] - }, - { - "title": "Not Null", - "urls": [ - "/${VERSION}/not-null.html" - ] - }, - { - "title": "Primary Key", - "urls": [ - "/${VERSION}/primary-key.html" - ] - }, - { - "title": "Unique", - "urls": [ - "/${VERSION}/unique.html" - ] - } - ] - }, - { - "title": "Functions and Operators", - "urls": [ - "/${VERSION}/functions-and-operators.html" - ] - }, - { - "title": "Window Functions", - "urls": [ - "/${VERSION}/window-functions.html" - ] - }, - { - "title": "Name Resolution", - "urls": [ - "/${VERSION}/sql-name-resolution.html" - ] - }, - { - "title": "System Catalogs", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/system-catalogs.html" - ] - }, - { - "title": "crdb_internal", - "urls": [ - "/${VERSION}/crdb-internal.html" - ] - }, - { - "title": "information_schema", - "urls": [ - "/${VERSION}/information-schema.html" - ] - }, - { - "title": "pg_catalog", - "urls": [ - "/${VERSION}/pg-catalog.html" - ] - }, - { - "title": "pg_extension", - "urls": [ - "/${VERSION}/pg-extension.html" - ] - } - ] - }, - { - "title": "Spatial Features", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/spatial-features.html" - ] - }, - { - "title": "Working with Spatial Data", - "urls": [ - "/${VERSION}/spatial-data.html" - ] - }, - { - "title": "Spatial and GIS Glossary", - "urls": [ - "/${VERSION}/spatial-glossary.html" - ] - }, - { - "title": "POINT", - "urls": [ - "/${VERSION}/point.html" - ] - }, - { - "title": "LINESTRING", - "urls": [ - "/${VERSION}/linestring.html" - ] - }, - { - "title": "POLYGON", - "urls": [ - "/${VERSION}/polygon.html" - ] - }, - { - "title": "MULTIPOINT", - "urls": [ - "/${VERSION}/multipoint.html" - ] - }, - { - "title": "MULTILINESTRING", - "urls": [ - "/${VERSION}/multilinestring.html" - ] - }, - { - "title": "MULTIPOLYGON", - "urls": [ - "/${VERSION}/multipolygon.html" - ] - }, - { - 
"title": "GEOMETRYCOLLECTION", - "urls": [ - "/${VERSION}/geometrycollection.html" - ] - }, - { - "title": "Well Known Text (WKT)", - "urls": [ - "/${VERSION}/well-known-text.html" - ] - }, - { - "title": "Well Known Binary (WKB)", - "urls": [ - "/${VERSION}/well-known-binary.html" - ] - }, - { - "title": "GeoJSON", - "urls": [ - "/${VERSION}/geojson.html" - ] - }, - { - "title": "SRID 4326 - longitude and latitude", - "urls": [ - "/${VERSION}/srid-4326.html" - ] - }, - { - "title": "ST_Contains", - "urls": [ - "/${VERSION}/st_contains.html" - ] - }, - { - "title": "ST_Within", - "urls": [ - "/${VERSION}/st_within.html" - ] - }, - { - "title": "ST_Intersects", - "urls": [ - "/${VERSION}/st_intersects.html" - ] - }, - { - "title": "ST_CoveredBy", - "urls": [ - "/${VERSION}/st_coveredby.html" - ] - }, - { - "title": "ST_Covers", - "urls": [ - "/${VERSION}/st_covers.html" - ] - }, - { - "title": "ST_Disjoint", - "urls": [ - "/${VERSION}/st_disjoint.html" - ] - }, - { - "title": "ST_Equals", - "urls": [ - "/${VERSION}/st_equals.html" - ] - }, - { - "title": "ST_Overlaps", - "urls": [ - "/${VERSION}/st_overlaps.html" - ] - }, - { - "title": "ST_Touches", - "urls": [ - "/${VERSION}/st_touches.html" - ] - }, - { - "title": "ST_ConvexHull", - "urls": [ - "/${VERSION}/st_convexhull.html" - ] - }, - { - "title": "ST_Union", - "urls": [ - "/${VERSION}/st_union.html" - ] - } - ] - }, - { - "title": "Experimental Features", - "urls": [ - "/${VERSION}/experimental-features.html" - ] - } - ] - }, - { - "title": "Cluster Settings", - "urls": [ - "/${VERSION}/cluster-settings.html" - ] - }, - { - "title": "Security", - "items": [ - { - "title": "Security Overview", - "urls": [ - "/${VERSION}/security-reference/security-overview.html" - ] - }, - { - "title": "Authentication", - "urls": [ - "/${VERSION}/security-reference/authentication.html" - ] - }, - { - "title": "Authorization", - "urls": [ - "/${VERSION}/security-reference/authorization.html" - ] - }, - { - "title": "Encryption", - "urls": [ - "/${VERSION}/security-reference/encryption.html" - ] - } - ] - }, - { - "title": "CLI", - "items": [ - { - "title": "Cockroach Commands", - "urls": [ - "/${VERSION}/cockroach-commands.html" - ] - }, - { - "title": "Client Connection Parameters", - "urls": [ - "/${VERSION}/connection-parameters.html" - ] - }, - { - "title": "cockroach Commands", - "items": [ - { - "title": "cockroach start", - "urls": [ - "/${VERSION}/cockroach-start.html" - ] - }, - { - "title": "cockroach init", - "urls": [ - "/${VERSION}/cockroach-init.html" - ] - }, - { - "title": "cockroach start-single-node", - "urls": [ - "/${VERSION}/cockroach-start-single-node.html" - ] - }, - { - "title": "cockroach cert", - "urls": [ - "/${VERSION}/cockroach-cert.html" - ] - }, - { - "title": "cockroach sql", - "urls": [ - "/${VERSION}/cockroach-sql.html" - ] - }, - { - "title": "cockroach sqlfmt", - "urls": [ - "/${VERSION}/cockroach-sqlfmt.html" - ] - }, - { - "title": "cockroach node", - "urls": [ - "/${VERSION}/cockroach-node.html" - ] - }, - { - "title": "cockroach nodelocal upload", - "urls": [ - "/${VERSION}/cockroach-nodelocal-upload.html" - ] - }, - { - "title": "cockroach auth-session", - "urls": [ - "/${VERSION}/cockroach-auth-session.html" - ] - }, - { - "title": "cockroach demo", - "urls": [ - "/${VERSION}/cockroach-demo.html" - ] - }, - { - "title": "cockroach debug ballast", - "urls": [ - "/${VERSION}/cockroach-debug-ballast.html" - ] - }, - { - "title": "cockroach debug encryption-active-key", - "urls": [ - 
"/${VERSION}/cockroach-debug-encryption-active-key.html" - ] - }, - { - "title": "cockroach debug job-trace", - "urls": [ - "/${VERSION}/cockroach-debug-job-trace.html" - ] - }, - { - "title": "cockroach debug list-files", - "urls": [ - "/${VERSION}/cockroach-debug-list-files.html" - ] - }, - { - "title": "cockroach debug merge-logs", - "urls": [ - "/${VERSION}/cockroach-debug-merge-logs.html" - ] - }, - { - "title": "cockroach debug tsdump", - "urls": [ - "/${VERSION}/cockroach-debug-tsdump.html" - ] - }, - { - "title": "cockroach debug zip", - "urls": [ - "/${VERSION}/cockroach-debug-zip.html" - ] - }, - { - "title": "cockroach statement-diag", - "urls": [ - "/${VERSION}/cockroach-statement-diag.html" - ] - }, - { - "title": "cockroach gen", - "urls": [ - "/${VERSION}/cockroach-gen.html" - ] - }, - { - "title": "cockroach userfile upload", - "urls": [ - "/${VERSION}/cockroach-userfile-upload.html" - ] - }, - { - "title": "cockroach userfile list", - "urls": [ - "/${VERSION}/cockroach-userfile-list.html" - ] - }, - { - "title": "cockroach userfile get", - "urls": [ - "/${VERSION}/cockroach-userfile-get.html" - ] - }, - { - "title": "cockroach userfile delete", - "urls": [ - "/${VERSION}/cockroach-userfile-delete.html" - ] - }, - { - "title": "cockroach version", - "urls": [ - "/${VERSION}/cockroach-version.html" - ] - }, - { - "title": "cockroach workload", - "urls": [ - "/${VERSION}/cockroach-workload.html" - ] - }, - { - "title": "cockroach import", - "urls": [ - "/${VERSION}/cockroach-import.html" - ] - } - ] - } - ] - }, - { - "title": "DB Console", - "items": [ - { - "title": "DB Console Overview", - "urls": [ - "/${VERSION}/ui-overview.html" - ] - }, - { - "title": "Cluster Overview Page", - "urls": [ - "/${VERSION}/ui-cluster-overview-page.html" - ] - }, - { - "title": "Metrics Dashboards", - "items": [ - { - "title": "Overview Dashboard", - "urls": [ - "/${VERSION}/ui-overview-dashboard.html" - ] - }, - { - "title": "Hardware Dashboard", - "urls": [ - "/${VERSION}/ui-hardware-dashboard.html" - ] - }, - { - "title": "Runtime Dashboard", - "urls": [ - "/${VERSION}/ui-runtime-dashboard.html" - ] - }, - { - "title": "SQL Dashboard", - "urls": [ - "/${VERSION}/ui-sql-dashboard.html" - ] - }, - { - "title": "Storage Dashboard", - "urls": [ - "/${VERSION}/ui-storage-dashboard.html" - ] - }, - { - "title": "Replication Dashboard", - "urls": [ - "/${VERSION}/ui-replication-dashboard.html" - ] - }, - { - "title": "Changefeeds Dashboard", - "urls": [ - "/${VERSION}/ui-cdc-dashboard.html" - ] - }, - { - "title": "Overload Dashboard", - "urls": [ - "/${VERSION}/ui-overload-dashboard.html" - ] - }, - { - "title": "Custom Chart", - "urls": [ - "/${VERSION}/ui-custom-chart-debug-page.html" - ] - } - ] - }, - { - "title": "Databases Page", - "urls": [ - "/${VERSION}/ui-databases-page.html" - ] - }, - { - "title": "Sessions Page", - "urls": [ - "/${VERSION}/ui-sessions-page.html" - ] - }, - { - "title": "Statements Page", - "urls": [ - "/${VERSION}/ui-statements-page.html" - ] - }, - { - "title": "Transactions Page", - "urls": [ - "/${VERSION}/ui-transactions-page.html" - ] - }, - { - "title": "Network Latency Page", - "urls": [ - "/${VERSION}/ui-network-latency-page.html" - ] - }, - { - "title": "Hot Ranges Page", - "urls": [ - "/${VERSION}/ui-hot-ranges-page.html" - ] - }, - { - "title": "Jobs Page", - "urls": [ - "/${VERSION}/ui-jobs-page.html" - ] - }, - { - "title": "Advanced Debug Page", - "urls": [ - "/${VERSION}/ui-debug-pages.html" - ] - } - ] - }, - { - "title": "Transaction Retry Error 
Reference", - "urls": [ - "/${VERSION}/transaction-retry-error-reference.html" - ] - }, - { - "title": "Cluster API", - "urls": [ - "https://www.cockroachlabs.com/docs/api/cluster/v2" - ] - }, - { - "title": "Cloud API", - "urls": [ - "https://www.cockroachlabs.com/docs/api/cloud/v1" - ] - }, - { - "title": "Logging", - "items": [ - { - "title": "Logging Levels and Channels", - "urls": [ - "/${VERSION}/logging.html" - ] - }, - { - "title": "Log Formats", - "urls": [ - "/${VERSION}/log-formats.html" - ] - }, - { - "title": "Notable Event Types", - "urls": [ - "/${VERSION}/eventlog.html" - ] - } - ] - }, - { - "title": "Diagnostics Reporting", - "urls": [ - "/${VERSION}/diagnostics-reporting.html" - ] - }, - { - "title": "Benchmarking", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/performance.html" - ] - }, - { - "title": "Benchmarking Instructions", - "urls": [ - "/${VERSION}/performance-benchmarking-with-tpcc-local.html", - "/${VERSION}/performance-benchmarking-with-tpcc-local-multiregion.html", - "/${VERSION}/performance-benchmarking-with-tpcc-small.html", - "/${VERSION}/performance-benchmarking-with-tpcc-medium.html", - "/${VERSION}/performance-benchmarking-with-tpcc-large.html" - ] - } - ] - }, - { - "title": "Third-Party Support", - "items": [ - { - "title": "Tools Supported by Cockroach Labs", - "urls": [ - "/${VERSION}/third-party-database-tools.html" - ] - }, - { - "title": "Tools Supported by the Community", - "urls": [ - "/${VERSION}/community-tooling.html" - ] - } - ] - } - ] - } diff --git a/src/current/_includes/v21.2/sidebar-data/releases.json b/src/current/_includes/v21.2/sidebar-data/releases.json deleted file mode 100644 index 18f2a1b7c6a..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/releases.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "title": "Releases", - "is_top_level": true, - "items": [ - {% include_cached sidebar-releases.json %} - ] - } diff --git a/src/current/_includes/v21.2/sidebar-data/stream.json b/src/current/_includes/v21.2/sidebar-data/stream.json deleted file mode 100644 index fa555b7310c..00000000000 --- a/src/current/_includes/v21.2/sidebar-data/stream.json +++ /dev/null @@ -1,60 +0,0 @@ -{ - "title": "Stream Data", - "is_top_level": true, - "items": [ - { - "title": "Change Data Capture Overview", - "urls": [ - "/${VERSION}/change-data-capture-overview.html" - ] - }, - { - "title": "Use Changefeeds", - "urls": [ - "/${VERSION}/use-changefeeds.html" - ] - }, - { - "title": "Create and Configure Changefeeds", - "urls": [ - "/${VERSION}/create-and-configure-changefeeds.html" - ] - }, - { - "title": "Changefeed Sinks", - "urls": [ - "/${VERSION}/changefeed-sinks.html" - ] - }, - { - "title": "Changefeeds in Multi-Region Deployments", - "urls": [ - "/${VERSION}/changefeeds-in-multi-region-deployments.html" - ] - }, - { - "title": "Monitor and Debug Changefeeds", - "urls": [ - "/${VERSION}/monitor-and-debug-changefeeds.html" - ] - }, - { - "title": "Changefeed Examples", - "urls": [ - "/${VERSION}/changefeed-examples.html" - ] - }, - { - "title": "Stream a Changefeed from CockroachDB Cloud to Snowflake", - "urls": [ - "/cockroachcloud/stream-changefeed-to-snowflake-aws.html" - ] - }, - { - "title": "Advanced Changefeed Configuration", - "urls": [ - "/${VERSION}/advanced-changefeed-configuration.html" - ] - } - ] -} diff --git a/src/current/_includes/v21.2/spatial/ogr2ogr-supported-version.md b/src/current/_includes/v21.2/spatial/ogr2ogr-supported-version.md deleted file mode 100644 index ad444257227..00000000000 --- 
a/src/current/_includes/v21.2/spatial/ogr2ogr-supported-version.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -An `ogr2ogr` version of 3.1.0 or higher is required to generate data that can be imported into CockroachDB. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/spatial/zmcoords.md b/src/current/_includes/v21.2/spatial/zmcoords.md deleted file mode 100644 index fedbb74e703..00000000000 --- a/src/current/_includes/v21.2/spatial/zmcoords.md +++ /dev/null @@ -1,27 +0,0 @@ - You can also store a `{{page.title}}` with the following additional dimensions: - -- A third dimension coordinate `Z` (`{{page.title}}Z`). -- A measure coordinate `M` (`{{page.title}}M`). -- Both a third dimension and a measure coordinate (`{{page.title}}ZM`). - -The `Z` and `M` dimensions can be accessed or modified using a number of [built-in functions](functions-and-operators.html#spatial-functions), including: - -- `ST_Z` -- `ST_M` -- `ST_Affine` -- `ST_Zmflag` -- `ST_MakePoint` -- `ST_MakePointM` -- `ST_Force3D` -- `ST_Force3DZ` -- `ST_Force3DM` -- `ST_Force4D` -- `ST_Snap` -- `ST_SnapToGrid` -- `ST_RotateZ` -- `ST_AddMeasure` - -Note that CockroachDB's [spatial indexing](spatial-indexes.html) is still based on the 2D coordinate system. This means that: - -- The Z/M dimension is not index accelerated when using spatial predicates. -- Some spatial functions ignore the Z/M dimension, with transformations discarding the Z/M value. diff --git a/src/current/_includes/v21.2/sql/add-size-limits-to-indexed-columns.md b/src/current/_includes/v21.2/sql/add-size-limits-to-indexed-columns.md deleted file mode 100644 index 91cf3d61a1e..00000000000 --- a/src/current/_includes/v21.2/sql/add-size-limits-to-indexed-columns.md +++ /dev/null @@ -1,22 +0,0 @@ -We **strongly recommend** adding size limits to all [indexed columns](indexes.html), which includes columns in [primary keys](primary-key.html). - -Values exceeding 1 MiB can lead to [storage layer write amplification](architecture/storage-layer.html#write-amplification) and cause significant performance degradation or even [crashes due to OOMs (out of memory errors)](cluster-setup-troubleshooting.html#out-of-memory-oom-crash). - -To add a size limit using [`CREATE TABLE`](create-table.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE name (first STRING(100), last STRING(100)); -~~~ - -To add a size limit using [`ALTER TABLE ... 
ALTER COLUMN`](alter-column.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -SET enable_experimental_alter_column_type_general = true; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE name ALTER first TYPE STRING(99); -~~~ diff --git a/src/current/_includes/v21.2/sql/begin-transaction-as-of-system-time-example.md b/src/current/_includes/v21.2/sql/begin-transaction-as-of-system-time-example.md deleted file mode 100644 index 7f2c11dac77..00000000000 --- a/src/current/_includes/v21.2/sql/begin-transaction-as-of-system-time-example.md +++ /dev/null @@ -1,19 +0,0 @@ -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM products; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ diff --git a/src/current/_includes/v21.2/sql/cannot-refresh-materialized-views-inside-transactions.md b/src/current/_includes/v21.2/sql/cannot-refresh-materialized-views-inside-transactions.md deleted file mode 100644 index f61bac51deb..00000000000 --- a/src/current/_includes/v21.2/sql/cannot-refresh-materialized-views-inside-transactions.md +++ /dev/null @@ -1,31 +0,0 @@ -CockroachDB cannot refresh {% if page.name == "views.md" %} materialized views {% else %} [materialized views](views.html#materialized-views) {% endif %} inside [explicit transactions](begin-transaction.html). Trying to refresh a materialized view inside an explicit transaction will result in an error, as shown below. - -1. First, start [`cockroach demo`](cockroach-demo.html) with the sample `bank` data set: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach demo bank - ~~~ - -2. Create the materialized view described in [Materialized views → Usage](views.html#usage). - -3. Start a new multi-statement transaction with [`BEGIN TRANSACTION`](begin-transaction.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - BEGIN TRANSACTION; - ~~~ - -4. Inside the open transaction, attempt to [refresh the view](refresh.html) as shown below. This will result in an error. - - {% include_cached copy-clipboard.html %} - ~~~ sql - REFRESH MATERIALIZED VIEW overdrawn_accounts; - ~~~ - - ~~~ - ERROR: cannot refresh view in an explicit transaction - SQLSTATE: 25000 - ~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/66008) diff --git a/src/current/_includes/v21.2/sql/combine-alter-table-commands.md b/src/current/_includes/v21.2/sql/combine-alter-table-commands.md deleted file mode 100644 index 62839cce017..00000000000 --- a/src/current/_includes/v21.2/sql/combine-alter-table-commands.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -This command can be combined with other `ALTER TABLE` commands in a single statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/sql/connection-parameters.md b/src/current/_includes/v21.2/sql/connection-parameters.md deleted file mode 100644 index 9e0699b0614..00000000000 --- a/src/current/_includes/v21.2/sql/connection-parameters.md +++ /dev/null @@ -1,9 +0,0 @@ -Flag | Description ------|------------ -`--host` | The server host and port number to connect to. 
This can be the address of any node in the cluster.<br><br>**Env Variable:** `COCKROACH_HOST`<br>**Default:** `localhost:26257`
-`--port`<br>`-p` | The server port to connect to. Note: The port number can also be specified via `--host`.<br><br>**Env Variable:** `COCKROACH_PORT`<br>**Default:** `26257`
-`--user`<br>`-u` | The [SQL user](create-user.html) that will own the client session.<br><br>**Env Variable:** `COCKROACH_USER`<br>**Default:** `root`
-`--insecure` | Use an insecure connection.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br>**Default:** `false`
-`--cert-principal-map` | A comma-separated list of `<cert-principal>:<db-principal>` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields.
-`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br>**Default:** `${HOME}/.cockroach-certs/`
- `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments. To convert a connection URL to the syntax that works with your client driver, run [`cockroach convert-url`](connection-parameters.html#convert-a-url-for-different-drivers).<br><br>**Env Variable:** `COCKROACH_URL`<br>
**Default:** no URL \ No newline at end of file diff --git a/src/current/_includes/v21.2/sql/covering-index.md b/src/current/_includes/v21.2/sql/covering-index.md deleted file mode 100644 index 366d4500b2c..00000000000 --- a/src/current/_includes/v21.2/sql/covering-index.md +++ /dev/null @@ -1 +0,0 @@ -An index that stores all the columns needed by a query is also known as a _covering index_ for that query. When a query has a covering index, CockroachDB can use that index directly instead of doing a [join](joins.html) with the [primary key](primary-key.html), which is likely to be slower. diff --git a/src/current/_includes/v21.2/sql/crdb-internal-partitions-example.md b/src/current/_includes/v21.2/sql/crdb-internal-partitions-example.md deleted file mode 100644 index 680b0adf261..00000000000 --- a/src/current/_includes/v21.2/sql/crdb-internal-partitions-example.md +++ /dev/null @@ -1,43 +0,0 @@ -## Querying partitions programmatically - -The `crdb_internal.partitions` internal table contains information about the partitions in your database. In testing, scripting, and other programmatic environments, we recommend querying this table for partition information instead of using the `SHOW PARTITIONS` statement. For example, to get all `us_west` partitions in your database, you can run the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM crdb_internal.partitions WHERE name='us_west'; -~~~ - -~~~ - table_id | index_id | parent_name | name | columns | column_names | list_value | range_value | zone_id | subzone_id -+----------+----------+-------------+---------+---------+--------------+-------------------------------------------------+-------------+---------+------------+ - 53 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 0 | 0 - 54 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 1 - 54 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 2 - 55 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 1 - 55 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 2 - 55 | 3 | NULL | us_west | 1 | vehicle_city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 3 - 56 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 56 | 1 - 58 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 58 | 1 -(8 rows) -~~~ - -Other internal tables, like `crdb_internal.tables`, include information that could be useful in conjunction with `crdb_internal.partitions`. 
- -For example, if you want the output for your partitions to include the name of the table and database, you can perform a join of the two tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - partitions.name AS partition_name, column_names, list_value, tables.name AS table_name, database_name - FROM crdb_internal.partitions JOIN crdb_internal.tables ON partitions.table_id=tables.table_id - WHERE tables.name='users'; -~~~ - -~~~ - partition_name | column_names | list_value | table_name | database_name -+----------------+--------------+-------------------------------------------------+------------+---------------+ - us_west | city | ('seattle'), ('san francisco'), ('los angeles') | users | movr - us_east | city | ('new york'), ('boston'), ('washington dc') | users | movr - europe_west | city | ('amsterdam'), ('paris'), ('rome') | users | movr -(3 rows) -~~~ diff --git a/src/current/_includes/v21.2/sql/crdb-internal-partitions.md b/src/current/_includes/v21.2/sql/crdb-internal-partitions.md deleted file mode 100644 index ebab5abe4ed..00000000000 --- a/src/current/_includes/v21.2/sql/crdb-internal-partitions.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -In testing, scripting, and other programmatic environments, we recommend querying the `crdb_internal.partitions` internal table for partition information instead of using the `SHOW PARTITIONS` statement. For more information, see [Querying partitions programmatically](show-partitions.html#querying-partitions-programmatically). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/sql/db-terms.md b/src/current/_includes/v21.2/sql/db-terms.md deleted file mode 100644 index e74ca554ad7..00000000000 --- a/src/current/_includes/v21.2/sql/db-terms.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To avoid confusion with the general term "[database](https://en.wikipedia.org/wiki/Database)", throughout this guide we refer to the logical object as a *database*, to CockroachDB by name, and to a deployment of CockroachDB as a [*cluster*](architecture/overview.html#cockroachdb-architecture-terms). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/sql/dev-schema-change-limits.md b/src/current/_includes/v21.2/sql/dev-schema-change-limits.md deleted file mode 100644 index fbcac99f13c..00000000000 --- a/src/current/_includes/v21.2/sql/dev-schema-change-limits.md +++ /dev/null @@ -1,3 +0,0 @@ -Review the [limitations of online schema changes in CockroachDB](online-schema-changes.html#limitations). Note that CockroachDB has [limited support for schema changes within the same explicit transaction](online-schema-changes.html#limited-support-for-schema-changes-within-transactions). - - We recommend doing schema changes outside explicit transactions, where possible. When a database [schema management tool](third-party-database-tools.html#schema-migration-tools) manages transactions on your behalf, we recommend only including one schema change operation per transaction. \ No newline at end of file diff --git a/src/current/_includes/v21.2/sql/dev-schema-changes.md b/src/current/_includes/v21.2/sql/dev-schema-changes.md deleted file mode 100644 index d1f63bbcff5..00000000000 --- a/src/current/_includes/v21.2/sql/dev-schema-changes.md +++ /dev/null @@ -1 +0,0 @@ -As a general best practice, we discourage the use of client libraries to execute [database schema changes](online-schema-changes.html). 
Instead, use a database schema migration tool, or the [CockroachDB SQL client](cockroach-sql.html). \ No newline at end of file diff --git a/src/current/_includes/v21.2/sql/expression-indexes-cannot-reference-computed-columns.md b/src/current/_includes/v21.2/sql/expression-indexes-cannot-reference-computed-columns.md deleted file mode 100644 index 4c66aca7d8b..00000000000 --- a/src/current/_includes/v21.2/sql/expression-indexes-cannot-reference-computed-columns.md +++ /dev/null @@ -1,3 +0,0 @@ -CockroachDB does not allow {% if page.name == "expression-indexes.md" %} expression indexes {% else %} [expression indexes](expression-indexes.html) {% endif %} to reference [computed columns](computed-columns.html). - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67900) diff --git a/src/current/_includes/v21.2/sql/expressions-as-on-conflict-targets.md b/src/current/_includes/v21.2/sql/expressions-as-on-conflict-targets.md deleted file mode 100644 index 2b328c1e4f3..00000000000 --- a/src/current/_includes/v21.2/sql/expressions-as-on-conflict-targets.md +++ /dev/null @@ -1,40 +0,0 @@ -CockroachDB does not support expressions as `ON CONFLICT` targets. This means that unique {% if page.name == "expression-indexes.md" %} expression indexes {% else %} [expression indexes](expression-indexes.html) {% endif %} cannot be selected as arbiters for [`INSERT .. ON CONFLICT`](insert.html#on-conflict-clause) statements. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE t (a INT, b INT, UNIQUE INDEX ((a + b))); -~~~ - -~~~ -CREATE TABLE -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING; -~~~ - -~~~ -invalid syntax: statement ignored: at or near "(": syntax error -SQLSTATE: 42601 -DETAIL: source SQL: -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO NOTHING - ^ -HINT: try \h INSERT -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10; -~~~ - -~~~ -invalid syntax: statement ignored: at or near "(": syntax error -SQLSTATE: 42601 -DETAIL: source SQL: -INSERT INTO t VALUES (1, 2) ON CONFLICT ((a + b)) DO UPDATE SET a = 10 - ^ -HINT: try \h INSERT -~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67893) diff --git a/src/current/_includes/v21.2/sql/function-special-forms.md b/src/current/_includes/v21.2/sql/function-special-forms.md deleted file mode 100644 index b9ac987444a..00000000000 --- a/src/current/_includes/v21.2/sql/function-special-forms.md +++ /dev/null @@ -1,29 +0,0 @@
-| Special form | Equivalent to |
-|-----------------------------------------------------------|---------------------------------------------|
-| `AT TIME ZONE` | `timezone()` |
-| `CURRENT_CATALOG` | `current_catalog()` |
-| `COLLATION FOR` | `pg_collation_for()` |
-| `CURRENT_DATE` | `current_date()` |
-| `CURRENT_ROLE` | `current_user()` |
-| `CURRENT_SCHEMA` | `current_schema()` |
-| `CURRENT_TIMESTAMP` | `current_timestamp()` |
-| `CURRENT_TIME` | `current_time()` |
-| `CURRENT_USER` | `current_user()` |
-| `EXTRACT(<element> FROM <expr>)` | `extract("<element>", <expr>)` |
-| `EXTRACT_DURATION(<element> FROM <expr>)` | `extract_duration("<element>", <expr>)` |
-| `OVERLAY(<expr1> PLACING <expr2> FROM <pos> FOR <len>)` | `overlay(<expr1>, <expr2>, <pos>, <len>)` |
-| `OVERLAY(<expr1> PLACING <expr2> FROM <pos>)` | `overlay(<expr1>, <expr2>, <pos>)` |
-| `POSITION(<substr> IN <str>)` | `strpos(<str>, <substr>)` |
-| `SESSION_USER` | `current_user()` |
-| `SUBSTRING(<str> FOR <len> FROM <pos>)` | `substring(<str>, <pos>, <len>)` |
-| `SUBSTRING(<str> FOR <len>)` | `substring(<str>, 1, <len>)` |
-| `SUBSTRING(<str> FROM <pos> FOR <len>)` | `substring(<str>, <pos>, <len>)` |
-| `SUBSTRING(<str> FROM <pos>)` | `substring(<str>, <pos>)` |
-| `TRIM(<chars> FROM <str>)` | `btrim(<str>, <chars>)` |
-| `TRIM(<str>, <chars>)` | `btrim(<str>, <chars>)` |
-| `TRIM(FROM <str>)` | `btrim(<str>)` |
-| `TRIM(LEADING <chars> FROM <str>)` | `ltrim(<str>, <chars>)` |
-| `TRIM(LEADING FROM <str>)` | `ltrim(<str>)` |
-| `TRIM(TRAILING <chars> FROM <str>)` | `rtrim(<str>, <chars>)` |
-| `TRIM(TRAILING FROM <str>)` | `rtrim(<str>)` |
-| `USER` | `current_user()` |
diff --git a/src/current/_includes/v21.2/sql/generated/diagrams/alter_table_partition_by.html b/src/current/_includes/v21.2/sql/generated/diagrams/alter_table_partition_by.html deleted file mode 100755 index 073c8794394..00000000000 --- a/src/current/_includes/v21.2/sql/generated/diagrams/alter_table_partition_by.html +++ /dev/null @@ -1,81 +0,0 @@
-[SVG railroad diagram (markup stripped in this view): ALTER TABLE [IF EXISTS] table_name PARTITION BY { LIST ( name_list ) ( list_partitions ) | RANGE ( name_list ) ( range_partitions ) | NOTHING }]
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/sql/generated/diagrams/alter_user_password.html b/src/current/_includes/v21.2/sql/generated/diagrams/alter_user_password.html deleted file mode 100644 index 0e014933d1b..00000000000 --- a/src/current/_includes/v21.2/sql/generated/diagrams/alter_user_password.html +++ /dev/null @@ -1,31 +0,0 @@
-[SVG railroad diagram (markup stripped in this view): ALTER USER [IF EXISTS] name WITH PASSWORD password]
diff --git a/src/current/_includes/v21.2/sql/generated/diagrams/create_user.html b/src/current/_includes/v21.2/sql/generated/diagrams/create_user.html deleted file mode 100644 index 1dc78bb289a..00000000000 --- a/src/current/_includes/v21.2/sql/generated/diagrams/create_user.html +++ /dev/null @@ -1,39 +0,0 @@
-[SVG railroad diagram (markup stripped in this view): CREATE USER [IF NOT EXISTS] name [WITH PASSWORD password]]
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/sql/generated/diagrams/drop_user.html b/src/current/_includes/v21.2/sql/generated/diagrams/drop_user.html deleted file mode 100644 index 57c3db991b9..00000000000 --- a/src/current/_includes/v21.2/sql/generated/diagrams/drop_user.html +++ /dev/null @@ -1,28 +0,0 @@
-[SVG railroad diagram (markup stripped in this view): DROP USER [IF EXISTS] user_name [, ...]]
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/sql/generated/diagrams/rename_column.html b/src/current/_includes/v21.2/sql/generated/diagrams/rename_column.html deleted file mode 100644 index 2d275bc9de7..00000000000 --- a/src/current/_includes/v21.2/sql/generated/diagrams/rename_column.html +++ /dev/null @@ -1,44 +0,0 @@
-[SVG railroad diagram (markup stripped in this view): ALTER TABLE [IF EXISTS] table_name RENAME COLUMN current_name TO name]
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/sql/generated/diagrams/show_full_scans.html b/src/current/_includes/v21.2/sql/generated/diagrams/show_full_scans.html deleted file mode 100644 index 6892f893296..00000000000 --- a/src/current/_includes/v21.2/sql/generated/diagrams/show_full_scans.html +++ /dev/null @@ -1,18 +0,0 @@
-[SVG railroad diagram (markup stripped in this view): SHOW FULL TABLE SCANS]
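Because the SVG markup above could not be carried over, the following is an illustrative sketch of the six grammars those railroad diagrams encoded. The statements and identifiers are examples written for this summary, not contents of the deleted files; the `users` table and its `city` column follow the MovR schema used elsewhere in this diff.

~~~ sql
-- alter_table_partition_by (the partition column must be a prefix of the primary key):
ALTER TABLE users PARTITION BY LIST (city) (
    PARTITION us_west VALUES IN ('seattle', 'san francisco', 'los angeles')
);
-- create_user / alter_user_password / drop_user:
CREATE USER IF NOT EXISTS maxroach WITH PASSWORD 'some-password';
ALTER USER IF EXISTS maxroach WITH PASSWORD 'some-other-password';
DROP USER IF EXISTS maxroach;
-- rename_column:
ALTER TABLE users RENAME COLUMN name TO full_name;
-- show_full_scans:
SHOW FULL TABLE SCANS;
~~~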
diff --git a/src/current/_includes/v21.2/sql/global-table-description.md b/src/current/_includes/v21.2/sql/global-table-description.md deleted file mode 100644 index 5a6292d970b..00000000000 --- a/src/current/_includes/v21.2/sql/global-table-description.md +++ /dev/null @@ -1,7 +0,0 @@ -A _global_ table is optimized for low-latency reads from every region in the database. The tradeoff is that writes will incur higher latencies from any given region, since writes have to be replicated across every region to make the global low-latency reads possible. Use global tables when your application has a "read-mostly" table of reference data that is rarely updated, and needs to be available to all regions. - -For an example of a table that can benefit from the _global_ table locality setting in a multi-region deployment, see the `promo_codes` table from the [MovR application](movr.html). - -For instructions showing how to set a table's locality to `GLOBAL`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#global). - -For more information about global tables, including troubleshooting information, see [Global Tables](global-tables.html). diff --git a/src/current/_includes/v21.2/sql/import-default-value.md b/src/current/_includes/v21.2/sql/import-default-value.md deleted file mode 100644 index 4a88ba003fb..00000000000 --- a/src/current/_includes/v21.2/sql/import-default-value.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -Column values cannot be generated by [`DEFAULT`](default-value.html) when importing; an import must include a value for every column specified in the `IMPORT` statement. To use `DEFAULT` values, your file must contain values for the column upon import, or you can [add the column](add-column.html) or [alter the column](alter-column.html#set-or-change-a-default-value) after the table has been imported. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/sql/import-into-default-value.md b/src/current/_includes/v21.2/sql/import-into-default-value.md deleted file mode 100644 index 8c23d6e3de4..00000000000 --- a/src/current/_includes/v21.2/sql/import-into-default-value.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -Column values cannot be generated by [`DEFAULT`](default-value.html) when importing; an import must include a value for every column specified in the `IMPORT INTO` statement. To use `DEFAULT` values, your file must contain values for the column upon import, or you can [add the column](add-column.html) or [alter the column](alter-column.html#set-or-change-a-default-value) after the table has been imported. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/sql/import-into-regional-by-row-table.md b/src/current/_includes/v21.2/sql/import-into-regional-by-row-table.md deleted file mode 100644 index abe9e16abe2..00000000000 --- a/src/current/_includes/v21.2/sql/import-into-regional-by-row-table.md +++ /dev/null @@ -1 +0,0 @@ -`IMPORT` and `IMPORT INTO` cannot directly import data to [`REGIONAL BY ROW`](set-locality.html#regional-by-row) tables that are part of [multi-region databases](multiregion-overview.html). For more information, including a workaround for this limitation, see [Known Limitations](known-limitations.html#import-into-a-regional-by-row-table). 
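As a minimal sketch of the `DEFAULT`-value restriction described in the import callouts above (the table, column list, and storage URI here are hypothetical): every column named in the statement must be supplied by the data file, because no column can fall back to its `DEFAULT` expression during the import.

~~~ sql
-- Hypothetical import: the CSV file must contain a value for id, city, and name in every row.
IMPORT INTO users (id, city, name)
    CSV DATA ('s3://example-bucket/users.csv?AUTH=implicit');
~~~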
diff --git a/src/current/_includes/v21.2/sql/indexes-regional-by-row.md b/src/current/_includes/v21.2/sql/indexes-regional-by-row.md deleted file mode 100644 index 0304c0131d1..00000000000 --- a/src/current/_includes/v21.2/sql/indexes-regional-by-row.md +++ /dev/null @@ -1,3 +0,0 @@ - In [multi-region deployments](multiregion-overview.html), most users should use [`REGIONAL BY ROW` tables](multiregion-overview.html#regional-by-row-tables) instead of explicit index [partitioning](partitioning.html). When you add an index to a `REGIONAL BY ROW` table, it is automatically partitioned on the [`crdb_region` column](set-locality.html#crdb_region). Explicit index partitioning is not required. - -While CockroachDB processes an [`ADD REGION`](add-region.html) or [`DROP REGION`](drop-region.html) statement on a particular database, creating or modifying an index will throw an error. Similarly, all [`ADD REGION`](add-region.html) and [`DROP REGION`](drop-region.html) statements will be blocked while an index is being modified on a `REGIONAL BY ROW` table within the same database. diff --git a/src/current/_includes/v21.2/sql/insert-vs-upsert.md b/src/current/_includes/v21.2/sql/insert-vs-upsert.md deleted file mode 100644 index cac251a6012..00000000000 --- a/src/current/_includes/v21.2/sql/insert-vs-upsert.md +++ /dev/null @@ -1,9 +0,0 @@ -When inserting or updating all columns of a table, and the table has no secondary -indexes, Cockroach Labs recommends using an `UPSERT` statement instead of the -equivalent `INSERT ON CONFLICT` statement. Whereas `INSERT ON CONFLICT` always -performs a read to determine the necessary writes, the `UPSERT` statement writes -without reading, making it faster. This may be particularly useful if -you are using a simple SQL table of two columns to [simulate direct KV access](sql-faqs.html#can-i-use-cockroachdb-as-a-key-value-store). -In this case, be sure to use the `UPSERT` statement. - -For tables with secondary indexes, there is no performance difference between `UPSERT` and `INSERT ON CONFLICT`. diff --git a/src/current/_includes/v21.2/sql/inverted-joins.md b/src/current/_includes/v21.2/sql/inverted-joins.md deleted file mode 100644 index 1f0c09ec64b..00000000000 --- a/src/current/_includes/v21.2/sql/inverted-joins.md +++ /dev/null @@ -1,102 +0,0 @@ -To run these examples, initialize a demo cluster with the MovR workload. - -{% include {{ page.version.version }}/demo_movr.md %} - -Create a GIN index on the `vehicles` table's `ext` column. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE INVERTED INDEX idx_vehicle_details ON vehicles(ext); -~~~ - -Check the statement plan for a `SELECT` statement that uses an inner inverted join. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM vehicles@primary AS v2 INNER INVERTED JOIN vehicles@idx_vehicle_details AS v1 ON v1.ext @> v2.ext; -~~~ - -~~~ - info -------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join - │ table: vehicles@primary - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 3 minutes ago) - table: vehicles@primary - spans: FULL SCAN -(16 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -You can omit the `INNER INVERTED JOIN` statement by putting `v1.ext` on the left side of a `@>` join condition in a `WHERE` clause and using an index hint for the GIN index. - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM vehicles@idx_vehicle_details AS v1, vehicles AS v2 WHERE v1.ext @> v2.ext; -~~~ - -~~~ - info --------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join - │ table: vehicles@primary - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 12 minutes ago) - table: vehicles@primary - spans: FULL SCAN -(16 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -Use the `LEFT INVERTED JOIN` hint to perform a left inverted join. - -~~~ sql -EXPLAIN SELECT * FROM vehicles AS v2 LEFT INVERTED JOIN vehicles AS v1 ON v1.ext @> v2.ext; -~~~ - -~~~ - info --------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join (left outer) - │ table: vehicles@primary - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join (left outer) - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 16 minutes ago) - table: vehicles@primary - spans: FULL SCAN -(16 rows) - -Time: 2ms total (execution 2ms / network 0ms) -~~~ diff --git a/src/current/_includes/v21.2/sql/jsonb-comparison.md b/src/current/_includes/v21.2/sql/jsonb-comparison.md deleted file mode 100644 index 012a8226880..00000000000 --- a/src/current/_includes/v21.2/sql/jsonb-comparison.md +++ /dev/null @@ -1,13 +0,0 @@ -CockroachDB does not support using comparison operators (such as `<` or `>`) on [`JSONB`](jsonb.html) elements. 
For example, the following query does not work and returns an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT '{"a": 1}'::JSONB -> 'a' < '{"b": 2}'::JSONB -> 'b'; -~~~ - -~~~ -ERROR: unsupported comparison operator: < -SQLSTATE: 22023 -~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/49144) diff --git a/src/current/_includes/v21.2/sql/limit-row-size.md b/src/current/_includes/v21.2/sql/limit-row-size.md deleted file mode 100644 index 7a27b3bc979..00000000000 --- a/src/current/_includes/v21.2/sql/limit-row-size.md +++ /dev/null @@ -1,22 +0,0 @@ -## Limit the size of rows - -To help you avoid failures arising from misbehaving applications that bloat the size of rows, you can specify the behavior when a row or individual [column family](column-families.html) larger than a specified size is written to the database. Use the [cluster settings](cluster-settings.html) `sql.guardrails.max_row_size_log` to discover large rows and `sql.guardrails.max_row_size_err` to reject large rows. - -When you write a row that exceeds `sql.guardrails.max_row_size_log`: - -- `INSERT`, `UPSERT`, `UPDATE`, `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, or `RESTORE` statements will log a `LargeRow` to the [`SQL_PERF`](logging.html#sql_perf) channel. -- `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected. - -When you write a row that exceeds `sql.guardrails.max_row_size_err`: - -- `INSERT`, `UPSERT`, and `UPDATE` statements will fail with a code `54000 (program_limit_exceeded)` error. - -- `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, and `RESTORE` statements will log a `LargeRowInternal` event to the [`SQL_INTERNAL_PERF`](logging.html#sql_internal_perf) channel. - -- `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected. - -You **cannot** update existing rows that violate the limit unless the update shrinks the size of the -row below the limit. You **can** select, delete, alter, back up, and restore such rows. We -recommend using the accompanying setting `sql.guardrails.max_row_size_log` in conjunction with -`SELECT pg_column_size()` queries to detect and fix any existing large rows before lowering -`sql.guardrails.max_row_size_err`. diff --git a/src/current/_includes/v21.2/sql/locality-optimized-search-limited-records.md b/src/current/_includes/v21.2/sql/locality-optimized-search-limited-records.md deleted file mode 100644 index 74dac3e20f2..00000000000 --- a/src/current/_includes/v21.2/sql/locality-optimized-search-limited-records.md +++ /dev/null @@ -1 +0,0 @@ -- {% if page.name == "cost-based-optimizer.md" %} Locality optimized search {% else %} [Locality optimized search](cost-based-optimizer.html#locality-optimized-search-in-multi-region-clusters) {% endif %} works only for queries selecting a limited number of records (up to 100,000 unique keys). It does not work with [`LIMIT`](limit-offset.html) clauses. 
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/64862) diff --git a/src/current/_includes/v21.2/sql/locality-optimized-search-virtual-computed-columns.md b/src/current/_includes/v21.2/sql/locality-optimized-search-virtual-computed-columns.md deleted file mode 100644 index 361e422b8a1..00000000000 --- a/src/current/_includes/v21.2/sql/locality-optimized-search-virtual-computed-columns.md +++ /dev/null @@ -1 +0,0 @@ -- {% if page.name == "cost-based-optimizer.md" %} Locality optimized search {% else %} [Locality optimized search](cost-based-optimizer.html#locality-optimized-search-in-multi-region-clusters) {% endif %} does not work for queries that use [partitioned unique indexes](partitioning.html#partition-using-a-secondary-index) on [virtual computed columns](computed-columns.html#virtual-computed-columns). A workaround for computed columns is to make the virtual computed column a [stored computed column](computed-columns.html#stored-computed-columns). Locality optimized search does not work for queries that use partitioned unique [expression indexes](expression-indexes.html). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/68129) diff --git a/src/current/_includes/v21.2/sql/locality-optimized-search.md b/src/current/_includes/v21.2/sql/locality-optimized-search.md deleted file mode 100644 index 65b84152f44..00000000000 --- a/src/current/_includes/v21.2/sql/locality-optimized-search.md +++ /dev/null @@ -1 +0,0 @@ -Note that the [SQL engine](architecture/sql-layer.html) will avoid sending requests to nodes in other regions when it can instead read a value from a unique column that is stored locally. This capability is known as [_locality optimized search_](cost-based-optimizer.html#locality-optimized-search-in-multi-region-clusters). diff --git a/src/current/_includes/v21.2/sql/macos-terminal-configuration.md b/src/current/_includes/v21.2/sql/macos-terminal-configuration.md deleted file mode 100644 index 961dea18b0b..00000000000 --- a/src/current/_includes/v21.2/sql/macos-terminal-configuration.md +++ /dev/null @@ -1,14 +0,0 @@ -In **Apple Terminal**: - -1. Navigate to "Preferences", then "Profiles", then "Keyboard". -1. Enable the checkbox "Use Option as Meta Key". - -[image: Apple Terminal Alt key configuration] - -In **iTerm2**: - -1. Navigate to "Preferences", then "Profiles", then "Keys". -1. Select the radio button "Esc+" for the behavior of the Left Option Key. - -[image: iTerm2 Alt key configuration] - diff --git a/src/current/_includes/v21.2/sql/movr-start-nodes.md b/src/current/_includes/v21.2/sql/movr-start-nodes.md deleted file mode 100644 index 0311fd67ba2..00000000000 --- a/src/current/_includes/v21.2/sql/movr-start-nodes.md +++ /dev/null @@ -1,6 +0,0 @@ -Run [`cockroach demo`](cockroach-demo.html) with the [`--nodes`](cockroach-demo.html#flags) and [`--demo-locality`](cockroach-demo.html#flags) flags. This command opens an interactive SQL shell to a temporary, multi-node in-memory cluster with the `movr` database preloaded and set as the [current database](sql-name-resolution.html#current-database). 
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo --nodes=3 --demo-locality=region=us-east1:region=us-central1:region=us-west1 - ~~~ diff --git a/src/current/_includes/v21.2/sql/movr-start.md b/src/current/_includes/v21.2/sql/movr-start.md deleted file mode 100644 index 75b6b8edc67..00000000000 --- a/src/current/_includes/v21.2/sql/movr-start.md +++ /dev/null @@ -1,54 +0,0 @@ -- Run [`cockroach demo`](cockroach-demo.html) to start a temporary, in-memory cluster with the `movr` dataset preloaded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo - ~~~ - -- Load the `movr` dataset into a persistent local cluster and open an interactive SQL shell: - 1. Start a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster. - 1. Use [`cockroach workload`](cockroach-workload.html) to load the `movr` dataset: - -
-    <div class="filters clearfix">
-      <button class="filter-button" data-scope="secure">Secure</button>
-      <button class="filter-button" data-scope="insecure">Insecure</button>
-    </div>
-
-    <section class="filter-content" markdown="1" data-scope="secure">
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach workload init movr 'postgresql://root@localhost:26257?sslcert=certs%2Fclient.root.crt&sslkey=certs%2Fclient.root.key&sslmode=verify-full&sslrootcert=certs%2Fca.crt'
-    ~~~
-    </section>
-
-    <section class="filter-content" markdown="1" data-scope="insecure">
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach workload init movr 'postgresql://root@localhost:26257?sslmode=disable'
-    ~~~
-    </section>
-
-    1. Use [`cockroach sql`](cockroach-sql.html) to open an interactive SQL shell and set `movr` as the [current database](sql-name-resolution.html#current-database):
-
-    <section class="filter-content" markdown="1" data-scope="secure">
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --certs-dir=certs --host=localhost:26257
-    ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > USE movr;
-    ~~~
-    </section>
-
-    <section class="filter-content" markdown="1" data-scope="insecure">
-    {% include_cached copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure --host=localhost:26257
-    ~~~
-
-    {% include_cached copy-clipboard.html %}
-    ~~~ sql
-    > USE movr;
-    ~~~
-    </section>
diff --git a/src/current/_includes/v21.2/sql/movr-statements-geo-partitioned-replicas.md b/src/current/_includes/v21.2/sql/movr-statements-geo-partitioned-replicas.md deleted file mode 100644 index b15c5c92aa7..00000000000 --- a/src/current/_includes/v21.2/sql/movr-statements-geo-partitioned-replicas.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) with the `--geo-partitioned-replicas` flag. This command opens an interactive SQL shell to a temporary, 9-node in-memory cluster with the `movr` database. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --geo-partitioned-replicas -~~~ diff --git a/src/current/_includes/v21.2/sql/movr-statements-nodes.md b/src/current/_includes/v21.2/sql/movr-statements-nodes.md deleted file mode 100644 index 4b9eddf612b..00000000000 --- a/src/current/_includes/v21.2/sql/movr-statements-nodes.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) with the [`--nodes`](cockroach-demo.html#flags) and [`--demo-locality`](cockroach-demo.html#flags) flags. This command opens an interactive SQL shell to a temporary, multi-node in-memory cluster with the `movr` database preloaded and set as the [current database](sql-name-resolution.html#current-database). - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --nodes=6 --demo-locality=region=us-east,zone=us-east-a:region=us-east,zone=us-east-b:region=us-central,zone=us-central-a:region=us-central,zone=us-central-b:region=us-west,zone=us-west-a:region=us-west,zone=us-west-b -~~~ diff --git a/src/current/_includes/v21.2/sql/movr-statements-partitioning.md b/src/current/_includes/v21.2/sql/movr-statements-partitioning.md deleted file mode 100644 index f45202c335c..00000000000 --- a/src/current/_includes/v21.2/sql/movr-statements-partitioning.md +++ /dev/null @@ -1,10 +0,0 @@ -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along with the examples below, open a new terminal and run [`cockroach demo`](cockroach-demo.html) with the [`--nodes`](cockroach-demo.html#flags) and [`--demo-locality`](cockroach-demo.html#flags) flags. This command opens an interactive SQL shell to a temporary, multi-node in-memory cluster with the `movr` database preloaded and set as the [current database](sql-name-resolution.html#current-database). 
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo \ ---nodes=9 \ ---demo-locality=region=us-east1:region=us-east1:region=us-east1:region=us-central1:region=us-central1:region=us-central1:region=us-west1:region=us-west1:region=us-west1 -~~~ diff --git a/src/current/_includes/v21.2/sql/movr-statements.md b/src/current/_includes/v21.2/sql/movr-statements.md deleted file mode 100644 index f696756213a..00000000000 --- a/src/current/_includes/v21.2/sql/movr-statements.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) to start a temporary, in-memory cluster with the `movr` dataset preloaded: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo -~~~ diff --git a/src/current/_includes/v21.2/sql/multiregion-example-setup.md b/src/current/_includes/v21.2/sql/multiregion-example-setup.md deleted file mode 100644 index 3a122623d44..00000000000 --- a/src/current/_includes/v21.2/sql/multiregion-example-setup.md +++ /dev/null @@ -1,26 +0,0 @@ -### Setup - -Only a [cluster region](multiregion-overview.html#cluster-regions) specified [at node startup](cockroach-start.html#locality) can be used as a [database region](multiregion-overview.html#database-regions). - -To follow along with the examples below, start a [demo cluster](cockroach-demo.html) with the [`--global` flag](cockroach-demo.html#general) to simulate a multi-region cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --global --nodes 9 --no-example-database -~~~ - -To see the regions available to the databases in the cluster, use a `SHOW REGIONS FROM CLUSTER` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM CLUSTER; -~~~ - -~~~ - region | zones ----------------+---------- - europe-west1 | {b,c,d} - us-east1 | {b,c,d} - us-west1 | {a,b,c} -(3 rows) -~~~ diff --git a/src/current/_includes/v21.2/sql/multiregion-movr-global.md b/src/current/_includes/v21.2/sql/multiregion-movr-global.md deleted file mode 100644 index f0b958b4a5d..00000000000 --- a/src/current/_includes/v21.2/sql/multiregion-movr-global.md +++ /dev/null @@ -1,17 +0,0 @@ -Because the data in `promo_codes` is not updated frequently (a.k.a., "read-mostly"), and needs to be available from any region, the right table locality is [`GLOBAL`](multiregion-overview.html#global-tables). - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE promo_codes SET locality GLOBAL; -~~~ - -Next, alter the `user_promo_codes` table to have a foreign key into the global `promo_codes` table. This will enable fast reads of the `promo_codes.code` column from any region in the cluster. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE user_promo_codes - ADD CONSTRAINT user_promo_codes_code_fk - FOREIGN KEY (code) - REFERENCES promo_codes (code) - ON UPDATE CASCADE; -~~~ diff --git a/src/current/_includes/v21.2/sql/multiregion-movr-regional-by-row.md b/src/current/_includes/v21.2/sql/multiregion-movr-regional-by-row.md deleted file mode 100644 index 70f13f3c10a..00000000000 --- a/src/current/_includes/v21.2/sql/multiregion-movr-regional-by-row.md +++ /dev/null @@ -1,103 +0,0 @@ -All of the tables except `promo_codes` contain rows which are partitioned by region, and updated very frequently. For these tables, the right table locality for optimizing access to their data is [`REGIONAL BY ROW`](multiregion-overview.html#regional-by-row-tables). - -Apply this table locality to the remaining tables. These statements use a `CASE` statement to put data for a given city in the right region and can take around 1 minute to complete for each table. - -- `rides` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE rides ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE rides ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE rides SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `user_promo_codes` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE user_promo_codes ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE user_promo_codes ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE user_promo_codes SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `users` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE users ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE users ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE users SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `vehicle_location_histories` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE vehicle_location_histories ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los 
angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE vehicle_location_histories ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE vehicle_location_histories SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ - -- `vehicles` - - {% include_cached copy-clipboard.html %} - ~~~ sql - ALTER TABLE vehicles ADD COLUMN region crdb_internal_region AS ( - CASE WHEN city = 'amsterdam' THEN 'europe-west1' - WHEN city = 'paris' THEN 'europe-west1' - WHEN city = 'rome' THEN 'europe-west1' - WHEN city = 'new york' THEN 'us-east1' - WHEN city = 'boston' THEN 'us-east1' - WHEN city = 'washington dc' THEN 'us-east1' - WHEN city = 'san francisco' THEN 'us-west1' - WHEN city = 'seattle' THEN 'us-west1' - WHEN city = 'los angeles' THEN 'us-west1' - END - ) STORED; - ALTER TABLE vehicles ALTER COLUMN REGION SET NOT NULL; - ALTER TABLE vehicles SET LOCALITY REGIONAL BY ROW AS "region"; - ~~~ diff --git a/src/current/_includes/v21.2/sql/physical-plan-url.md b/src/current/_includes/v21.2/sql/physical-plan-url.md deleted file mode 100644 index 0e9109a8586..00000000000 --- a/src/current/_includes/v21.2/sql/physical-plan-url.md +++ /dev/null @@ -1 +0,0 @@ -The generated physical statement plan is encoded into a byte string after the [fragment identifier (`#`)](https://en.wikipedia.org/wiki/Fragment_identifier) in the generated URL. The fragment is not sent to the web server; instead, the browser waits for the web server to return a `decode.html` resource, and then JavaScript on the web page decodes the fragment into a physical statement plan diagram. The statement plan is, therefore, not logged by a server external to the CockroachDB cluster and not exposed to the public internet. diff --git a/src/current/_includes/v21.2/sql/preloaded-databases.md b/src/current/_includes/v21.2/sql/preloaded-databases.md deleted file mode 100644 index 3f1478c9b38..00000000000 --- a/src/current/_includes/v21.2/sql/preloaded-databases.md +++ /dev/null @@ -1,13 +0,0 @@ -New clusters and existing clusters [upgraded](upgrade-cockroach-version.html) to {{ page.version.version }} or later will include auto-generated databases, with the following purposes: - -- The empty `defaultdb` database is used if a client does not specify a database in the [connection parameters](connection-parameters.html). -- The `movr` database contains data about users, vehicles, and rides for the vehicle-sharing app [MovR](movr.html). -- The empty `postgres` database is provided for compatibility with PostgreSQL client applications that require it. -- The `startrek` database contains quotes from _Star Trek_ episodes. -- The `system` database contains CockroachDB metadata and is read-only. - -All databases except for the `system` database can be [deleted](drop-database.html) if they are not needed. - -{{site.data.alerts.callout_danger}} -Do not query the `system` database directly. Instead, use objects within the [system catalogs](system-catalogs.html). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.2/sql/privileges.md b/src/current/_includes/v21.2/sql/privileges.md deleted file mode 100644 index a98d194286d..00000000000 --- a/src/current/_includes/v21.2/sql/privileges.md +++ /dev/null @@ -1,13 +0,0 @@ -Privilege | Levels ----------|------------ -`ALL` | Database, Schema, Table, Type -`CREATE` | Database, Schema, Table -`DROP` | Database, Table -`GRANT` | Database, Schema, Table, Type -`CONNECT` | Database -`SELECT` | Table -`INSERT` | Table -`DELETE` | Table -`UPDATE` | Table -`USAGE` | Schema, Type -`ZONECONFIG` | Database, Table diff --git a/src/current/_includes/v21.2/sql/querying-partitions.md b/src/current/_includes/v21.2/sql/querying-partitions.md deleted file mode 100644 index bb2b9d6f09a..00000000000 --- a/src/current/_includes/v21.2/sql/querying-partitions.md +++ /dev/null @@ -1,163 +0,0 @@ -## Query partitions - -Similar to [indexes](indexes.html), partitions can improve query performance by limiting the number of rows that a query must scan. In the case of [geo-partitioned data](regional-tables.html), partitioning can limit a query scan to data in a specific region. - -### Filter on an indexed column - -If you filter the query of a partitioned table on a [column in the index directly following the partition prefix](indexes.html), the [cost-based optimizer](cost-based-optimizer.html) creates a query plan that scans each partition in parallel, rather than performing a costly sequential scan of the entire table. - -For example, suppose that the tables in the [`movr`](movr.html) database are geo-partitioned by region, and you want to query the `users` table for information about a specific user. - -Here is the `CREATE TABLE` statement for the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement -+------------+-------------------------------------------------------------------------------------+ - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - | FAMILY "primary" (id, city, name, address, credit_card) - | ) PARTITION BY LIST (city) ( - | PARTITION us_west VALUES IN (('seattle'), ('san francisco'), ('los angeles')), - | PARTITION us_east VALUES IN (('new york'), ('boston'), ('washington dc')), - | PARTITION europe_west VALUES IN (('amsterdam'), ('paris'), ('rome')) - | ); - | ALTER PARTITION europe_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]'; - | ALTER PARTITION us_east OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]'; - | ALTER PARTITION us_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' -(1 row) -~~~ - -If you know the user's id, you can filter on the `id` column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+---------------+----------------------+-------------+ - 00000000-0000-4000-8000-000000000000 | new york | Robert Murphy | 99176 Anderson Mills | 8885705228 -(1 row) -~~~ - -An [`EXPLAIN`](explain.html) statement shows more detail about the cost-based optimizer's plan: - -{% include_cached copy-clipboard.html %} -~~~ 
sql -> EXPLAIN SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - tree | field | description -+------+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | distributed | true - | vectorized | false - scan | | - | table | users@primary - | spans | -/"amsterdam" /"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"amsterdam\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston" /"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"boston\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles" /"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"los angeles\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york" /"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"new york\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris" 
/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"paris\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome" /"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"rome\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco" /"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"san francisco\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle" /"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"seattle\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc" /"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"washington dc\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"- - | filter | id = '00000000-0000-4000-8000-000000000000' -(6 rows) -~~~ - -Because the `id` column is in the primary index, directly after the partition prefix (`city`), the optimal query is constrained by the partitioned values. This means the query scans each partition in parallel for the unique `id` value. - -If you know the set of all possible partitioned values, adding a check constraint to the table's create statement can also improve performance. For example: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE users ADD CONSTRAINT check_city CHECK (city IN ('amsterdam','boston','los angeles','new york','paris','rome','san francisco','seattle','washington dc')); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - tree | field | description -+------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | distributed | false - | 
vectorized | false - scan | | - | table | users@primary - | spans | /"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# - | parallel | -(6 rows) -~~~ - - -To see the performance improvement over a query that performs a full table scan, compare these queries to a query with a filter on a column that is not in the index. - -### Filter on a non-indexed column - -Suppose that you want to query the `users` table for information about a specific user, but you only know the user's name. - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users WHERE name='Robert Murphy'; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+---------------+----------------------+-------------+ - 00000000-0000-4000-8000-000000000000 | new york | Robert Murphy | 99176 Anderson Mills | 8885705228 -(1 row) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM users WHERE name='Robert Murphy'; -~~~ - -~~~ - tree | field | description -+------+-------------+------------------------+ - | distributed | true - | vectorized | false - scan | | - | table | users@primary - | spans | ALL - | filter | name = 'Robert Murphy' -(6 rows) -~~~ - -The query returns the same result, but because `name` is not an indexed column, the query performs a full table scan that spans across all partition values. - -### Filter on a partitioned column - -If you know which partition contains the data that you are querying, using a filter (e.g., a [`WHERE` clause](select-clause.html#filter-rows)) on the column that is used for the partition can further improve performance by limiting the scan to the specific partition(s) that contain the data that you are querying. - -Now suppose that you know the user's name and location. 
-
-### Filter on a partitioned column
-
-If you know which partition contains the data that you are querying, a filter (e.g., a [`WHERE` clause](select-clause.html#filter-rows)) on the column used for the partition can further improve performance by limiting the scan to the specific partition(s) that contain that data.
-
-Now suppose that you know the user's name and location. You can query the table with a filter on the user's name and city:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT * FROM users WHERE name='Robert Murphy' AND city='new york';
-~~~
-
-~~~
-  tree |    field    |            description
-+------+-------------+-----------------------------------+
-       | distributed | true
-       | vectorized  | false
-  scan |             |
-       | table       | users@primary
-       | spans       | /"new york"-/"new york"/PrefixEnd
-       | filter      | name = 'Robert Murphy'
-(6 rows)
-~~~
-
-The query returns the same results as before, but at a much lower cost, because the scan now spans only the `new york` partition value.
diff --git a/src/current/_includes/v21.2/sql/regional-by-row-table-description.md b/src/current/_includes/v21.2/sql/regional-by-row-table-description.md
deleted file mode 100644
index 9c083d478f6..00000000000
--- a/src/current/_includes/v21.2/sql/regional-by-row-table-description.md
+++ /dev/null
@@ -1,7 +0,0 @@
-In a _regional by row_ table, individual rows are optimized for access from different regions. This setting automatically divides a table and all of [its indexes](multiregion-overview.html#indexes-on-regional-by-row-tables) into [partitions](partitioning.html), with each partition optimized for access from a different region. Like [regional tables](multiregion-overview.html#regional-tables), _regional by row_ tables are optimized for access from a single region. However, that region is specified at the row level instead of applying to the whole table.
-
-Use regional by row tables when your application requires low-latency reads and writes at the row level, and individual rows are primarily accessed from a single region. For example, a `users` table in a global application may need to keep some users' data in specific regions for better performance.
-
-For an example of a table that can benefit from the _regional by row_ setting in a multi-region deployment, see the `users` table from the [MovR application](movr.html).
-
-For instructions showing how to set a table's locality to `REGIONAL BY ROW`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#regional-by-row).
diff --git a/src/current/_includes/v21.2/sql/regional-table-description.md b/src/current/_includes/v21.2/sql/regional-table-description.md
deleted file mode 100644
index c535391692c..00000000000
--- a/src/current/_includes/v21.2/sql/regional-table-description.md
+++ /dev/null
@@ -1,5 +0,0 @@
-In a _regional_ table, access to the table is fast in the table's "home region" and slower in other regions. In other words, CockroachDB optimizes access to data in a regional table from a single region. By default, a regional table's home region is the [database's primary region](multiregion-overview.html#database-regions), but that can be changed to any region in the database. Regional tables work well when your application requires low-latency reads and writes for an entire table from a single region.
-
-For instructions showing how to set a table's locality to `REGIONAL BY TABLE`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#regional-by-table).
-
-By default, all tables in a multi-region database are _regional_ tables that use the database's primary region. Unless you know your application needs different performance characteristics than regional tables provide, there is no need to change this setting.
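-
-For example, a minimal sketch of both locality settings; the table names are illustrative, and a multi-region database with defined regions is assumed:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE users SET LOCALITY REGIONAL BY ROW;
-> ALTER TABLE reference_data SET LOCALITY REGIONAL BY TABLE IN PRIMARY REGION;
-~~~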
diff --git a/src/current/_includes/v21.2/sql/replication-zone-patterns-to-multiregion-sql-mapping.md b/src/current/_includes/v21.2/sql/replication-zone-patterns-to-multiregion-sql-mapping.md
deleted file mode 100644
index 4aa36cf2dec..00000000000
--- a/src/current/_includes/v21.2/sql/replication-zone-patterns-to-multiregion-sql-mapping.md
+++ /dev/null
@@ -1,5 +0,0 @@
-| Replication Zone Pattern | Multi-Region SQL |
-|--------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Duplicate indexes](../v20.2/topology-duplicate-indexes.html) | [`GLOBAL` tables](global-tables.html) |
-| [Geo-partitioned replicas](../v20.2/topology-geo-partitioned-replicas.html) | [`REGIONAL BY ROW` tables](regional-tables.html#regional-by-row-tables) with [`ZONE` survival goals](multiregion-overview.html#surviving-zone-failures) |
-| [Geo-partitioned leaseholders](../v20.2/topology-geo-partitioned-leaseholders.html) | [`REGIONAL BY ROW` tables](regional-tables.html#regional-by-row-tables) with [`REGION` survival goals](multiregion-overview.html#surviving-region-failures) |
diff --git a/src/current/_includes/v21.2/sql/retry-savepoints.md b/src/current/_includes/v21.2/sql/retry-savepoints.md
deleted file mode 100644
index 6b9e78209f0..00000000000
--- a/src/current/_includes/v21.2/sql/retry-savepoints.md
+++ /dev/null
@@ -1 +0,0 @@
-A savepoint defined with the name `cockroach_restart` is a "retry savepoint" and is used to implement [advanced client-side transaction retries](advanced-client-side-transaction-retries.html). For more information, see [Retry savepoints](advanced-client-side-transaction-retries.html#retry-savepoints).
diff --git a/src/current/_includes/v21.2/sql/savepoint-ddl-rollbacks.md b/src/current/_includes/v21.2/sql/savepoint-ddl-rollbacks.md
deleted file mode 100644
index 57da82ae775..00000000000
--- a/src/current/_includes/v21.2/sql/savepoint-ddl-rollbacks.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-Rollbacks to savepoints over [DDL](https://en.wikipedia.org/wiki/Data_definition_language) statements are supported only if you're rolling back to a savepoint created at the beginning of the transaction.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v21.2/sql/savepoints-and-high-priority-transactions.md b/src/current/_includes/v21.2/sql/savepoints-and-high-priority-transactions.md
deleted file mode 100644
index 4b77f2dd561..00000000000
--- a/src/current/_includes/v21.2/sql/savepoints-and-high-priority-transactions.md
+++ /dev/null
@@ -1 +0,0 @@
-[`ROLLBACK TO SAVEPOINT`](rollback-transaction.html#rollback-a-nested-transaction) (for either regular savepoints or "retry savepoints" defined with `cockroach_restart`) causes a "feature not supported" error after a DDL statement in a [`HIGH PRIORITY` transaction](transactions.html#transaction-priorities), in order to avoid a transaction deadlock. For more information, see GitHub issue [#46414](https://www.github.com/cockroachdb/cockroach/issues/46414).
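-
-A minimal sketch of the retry-savepoint pattern described above; the `UPDATE` stands in for your transaction's statements, and the table and values are hypothetical. On a `40001` error, the client would issue `ROLLBACK TO SAVEPOINT cockroach_restart` and retry the statements:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> SAVEPOINT cockroach_restart;
-> UPDATE products SET inventory_count = 0 WHERE sku = 'abc123';
-> RELEASE SAVEPOINT cockroach_restart;
-> COMMIT;
-~~~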
diff --git a/src/current/_includes/v21.2/sql/savepoints-and-row-locks.md b/src/current/_includes/v21.2/sql/savepoints-and-row-locks.md
deleted file mode 100644
index 0468c12fc4e..00000000000
--- a/src/current/_includes/v21.2/sql/savepoints-and-row-locks.md
+++ /dev/null
@@ -1,12 +0,0 @@
-CockroachDB supports exclusive row locks.
-
-- In PostgreSQL, row locks are released/cancelled upon [`ROLLBACK TO SAVEPOINT`][rts].
-- In CockroachDB, row locks are preserved upon [`ROLLBACK TO SAVEPOINT`][rts].
-
-This is an architectural difference that may or may not be lifted in a later CockroachDB version.
-
-The code of client applications that rely on row locks must be reviewed and possibly modified to account for this difference. In particular, if an application relies on [`ROLLBACK TO SAVEPOINT`][rts] to release row locks and allow a concurrent transaction touching the same rows to proceed, that pattern will not work with CockroachDB.
-
-
-
-[rts]: rollback-transaction.html
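-
-For example, a minimal sketch of the difference; the table, key, and savepoint name are illustrative:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> SAVEPOINT before_lock;
-> SELECT * FROM kv WHERE k = 1 FOR UPDATE;
-> ROLLBACK TO SAVEPOINT before_lock;
--- PostgreSQL releases the row lock on k = 1 here; CockroachDB holds it
--- until the transaction commits or rolls back entirely.
-> COMMIT;
-~~~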
diff --git a/src/current/_includes/v21.2/sql/schema-changes.md b/src/current/_includes/v21.2/sql/schema-changes.md
deleted file mode 100644
index 04c49c2fbd2..00000000000
--- a/src/current/_includes/v21.2/sql/schema-changes.md
+++ /dev/null
@@ -1 +0,0 @@
-- Schema changes through [`ALTER TABLE`](alter-table.html), [`DROP DATABASE`](drop-database.html), [`DROP TABLE`](drop-table.html), and [`TRUNCATE`](truncate.html)
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/sql/schema-terms.md b/src/current/_includes/v21.2/sql/schema-terms.md
deleted file mode 100644
index d66ebd4058d..00000000000
--- a/src/current/_includes/v21.2/sql/schema-terms.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-To avoid confusion with the general term "[schema](https://en.wiktionary.org/wiki/schema)", in this guide we refer to the logical object as a *user-defined schema*, and to the relationship structure of logical objects in a cluster as a *database schema*.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v21.2/sql/select-for-update-overview.md b/src/current/_includes/v21.2/sql/select-for-update-overview.md
deleted file mode 100644
index b367320f12e..00000000000
--- a/src/current/_includes/v21.2/sql/select-for-update-overview.md
+++ /dev/null
@@ -1,20 +0,0 @@
-The `SELECT FOR UPDATE` statement orders transactions by controlling concurrent access to one or more rows of a table.
-
-It works by locking the rows returned by a [selection query][selection], so that other transactions trying to access those rows must wait for the locking transaction to finish. These other transactions are effectively queued based on when they tried to read the value of the locked rows.
-
-Because this queueing happens during the read operation, it prevents the [thrashing](https://en.wikipedia.org/wiki/Thrashing_(computer_science)) that would otherwise occur when multiple concurrently executing transactions `SELECT` the same data and then `UPDATE` the results of that selection. By preventing thrashing, CockroachDB also prevents the [transaction retries][retries] that would otherwise occur.
-
-As a result, using `SELECT FOR UPDATE` leads to increased throughput and decreased tail latency for contended operations.
-
-Note that using `SELECT FOR UPDATE` does not completely eliminate the chance of [serialization errors](transaction-retry-error-reference.html), which use the `SQLSTATE` error code `40001` and emit error messages with the string `restart transaction`. These errors can also arise due to [time uncertainty](architecture/transaction-layer.html#transaction-conflicts). To eliminate the need for application-level retry logic, your application also needs to use a [driver that implements automatic retry handling](transactions.html#client-side-intervention) in addition to `SELECT FOR UPDATE`.
-
-CockroachDB currently does not support the `FOR SHARE`/`FOR KEY SHARE` [locking strengths](select-for-update.html#locking-strengths) or the `SKIP LOCKED` [wait policy](select-for-update.html#wait-policies).
-
-{{site.data.alerts.callout_info}}
-By default, CockroachDB uses the `SELECT FOR UPDATE` locking mechanism during the initial row scan performed in [`UPDATE`](update.html) and [`UPSERT`](upsert.html) statement execution. To turn off implicit `SELECT FOR UPDATE` locking for `UPDATE` and `UPSERT` statements, set the `enable_implicit_select_for_update` session variable to `false`.
-{{site.data.alerts.end}}
-
-
-
-[retries]: transactions.html#transaction-retries
-[selection]: selection-queries.html
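-
-For example, a minimal sketch of the queueing behavior, assuming a simple `kv` table with an integer key `k` and value `v`:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> SELECT * FROM kv WHERE k = 1 FOR UPDATE;
--- Concurrent transactions that try to read or write k = 1 now wait for
--- this transaction to finish, instead of thrashing and retrying.
-> UPDATE kv SET v = v + 5 WHERE k = 1;
-> COMMIT;
-~~~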
diff --git a/src/current/_includes/v21.2/sql/set-transaction-as-of-system-time-example.md b/src/current/_includes/v21.2/sql/set-transaction-as-of-system-time-example.md
deleted file mode 100644
index 8e758f1c303..00000000000
--- a/src/current/_includes/v21.2/sql/set-transaction-as-of-system-time-example.md
+++ /dev/null
@@ -1,24 +0,0 @@
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SET TRANSACTION AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00';
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM products;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> COMMIT;
-~~~
diff --git a/src/current/_includes/v21.2/sql/shell-commands.md b/src/current/_includes/v21.2/sql/shell-commands.md
deleted file mode 100644
index 8065fe5b126..00000000000
--- a/src/current/_includes/v21.2/sql/shell-commands.md
+++ /dev/null
@@ -1,24 +0,0 @@
-The following commands can be used within the interactive SQL shell:
-
-Command | Usage
---------|------------
-`\?`,`help` | View this help within the shell.
-`\q`,`quit`,`exit`,`ctrl-d` | Exit the shell. When no text follows the prompt, `ctrl-c` exits the shell as well; otherwise, `ctrl-c` clears the line.
-`\!` | Run an external command and print its results to `stdout`. [See an example](cockroach-sql.html#run-external-commands-from-the-sql-shell).
-\| | Run the output of an external command as SQL statements. [See an example](cockroach-sql.html#run-external-commands-from-the-sql-shell).
-`\set